This repository has been archived by the owner on Mar 19, 2020. It is now read-only.

Sunset: Picking a Production Platform Successor #22

Open
josiahseaman opened this issue Oct 15, 2019 · 2 comments

Comments

@josiahseaman
Member

josiahseaman commented Oct 15, 2019

This is a second-generation version of Issue: Decide on Technology Stack. Our requirements have now changed significantly. For production, we should start a new project that will replace the ugly prototype that is MatrixTubemap. For starters we will:

  • Run entirely off of pre-generated static files: no complicated server setup, no Django
  • Render the matrix as a fundamentally rectangular object (pixel canvas)
  • Use a linear rendering of annotations (UCSC, JBrowse, IGV, etc.)
  • Have a single pangenome reference frame to align to

Jobs to Fill

  1. Load Graph, Sort, Link: ODGI (C++ with Python Bindings)
  2. Convert to our static format: Tubeify (Should still work here)
  3. Render queries to Pixels
    • SPARQL endpoint: ?
    • FluentDNA: creates a static zoom stack (see the sketch after this list)
  4. Browsing Canvas: OpenSeadragon or chromozoom.org (either will need an extra zoom dimension implemented; OSD has better image plugins, chromozoom has user tracks)
  5. Annotation Stretch to Pangenome coordinates: not yet written
  6. Annotation Renderer: (UCSC, JBrowse, JBrowse2, IGV, etc.)
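
To make the "static zoom stack" idea in step 3 concrete, here is a minimal sketch (not FluentDNA's actual output) of pre-rendering one large matrix PNG into power-of-two downsampled levels that a canvas viewer such as OpenSeadragon could page through with a custom tile source. The `level_<n>/` directory layout and level count are placeholders for illustration.

```python
# Minimal sketch of a pre-generated zoom stack: write power-of-two
# downsampled copies of one large matrix image. The level_<n>/ layout
# is an assumption, not FluentDNA's real output format.
import os
from PIL import Image

def build_zoom_stack(matrix_png, out_dir, levels=6):
    base = Image.open(matrix_png)
    for level in range(levels):
        factor = 2 ** level
        size = (max(1, base.width // factor), max(1, base.height // factor))
        level_dir = os.path.join(out_dir, f"level_{level}")
        os.makedirs(level_dir, exist_ok=True)
        # NEAREST keeps hard cell boundaries; smoother filters blur short runs.
        base.resize(size, Image.NEAREST).save(os.path.join(level_dir, "matrix.png"))

build_zoom_stack("matrix.png", "zoom_stack")
```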
@josiahseaman josiahseaman added documentation Improvements or additions to documentation question Further information is requested labels Oct 15, 2019
@josiahseaman josiahseaman self-assigned this Oct 15, 2019
@ekg

ekg commented Oct 15, 2019

This looks good to me.

Could we go directly from odgi to the pixel format? This could work via the Python bindings.
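
As a rough illustration of what "odgi straight to pixels" could look like, here is a sketch assuming the odgi Python bindings expose load(), for_each_path_handle(), for_each_step_in_path(), get_handle_of_step(), get_id(), and get_is_reverse() as in the odgi documentation; the file names and the one-node-per-column mapping are placeholders.

```python
# Sketch: one row per path, one column per node (assumes node ids are
# 1..N after sorting); white = forward visit, red = reverse visit.
import numpy as np
from PIL import Image
import odgi

g = odgi.graph()
g.load("pangenome_sorted.og")

paths = []
g.for_each_path_handle(lambda p: paths.append(p))

canvas = np.zeros((len(paths), g.get_node_count(), 3), dtype=np.uint8)

for row, p in enumerate(paths):
    def paint(step, row=row):
        h = g.get_handle_of_step(step)
        col = g.get_id(h) - 1
        canvas[row, col] = (255, 0, 0) if g.get_is_reverse(h) else (255, 255, 255)
    g.for_each_step_in_path(p, paint)

Image.fromarray(canvas).save("matrix.png")
```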

@josiahseaman
Member Author

josiahseaman commented Oct 18, 2019

Erik, my inclination is certainly to go directly to pixels because it's so straightforward. My only concern is that we might eventually want 2-3 parallel images with different color info. We can color progress and inversions in one image, but we can't also pack copy number, row, and haplotype into the same color space. However, progress and inversion coloring is a strong first use case, and haplotype is better served by row sorting. Row coloring is only a visual assist, which can be replaced with good mouse UX. That leaves copy number, which is rare, so maybe we can smuggle it in or just have the one alternative color image.
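
To make the color-space budget above concrete, a toy sketch: progress and inversions fit into two channels of one RGB image, leaving one spare channel, while copy number either takes that spare channel or spills into a second parallel image. The per-cell arrays here are random placeholders, not real pipeline output.

```python
# Toy illustration of packing signals into color channels.
import numpy as np
from PIL import Image

rows, cols = 100, 2000
progress    = np.random.rand(rows, cols)            # 0..1 along the pangenome
inversion   = np.random.rand(rows, cols) > 0.95     # boolean per cell
copy_number = np.random.poisson(1.0, (rows, cols))  # rare signal

primary = np.zeros((rows, cols, 3), dtype=np.uint8)
primary[..., 0] = (progress * 255).astype(np.uint8)  # R: progress gradient
primary[..., 1] = inversion.astype(np.uint8) * 255   # G: inversion flag
Image.fromarray(primary).save("matrix_primary.png")

# The rarer copy-number signal goes in a second, alternative color image.
secondary = np.clip(copy_number * 64, 0, 255).astype(np.uint8)
Image.fromarray(secondary, mode="L").save("matrix_copy_number.png")
```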

I just wrote a lengthy post describing options for a front-end platform: OpenSeadragon #277
