Backlog
- additions to search facets
- "Did you mean" (DYM) and "People interested in this data set were also interested in..." features.
- present a report to custodians: "your data was found by people using (this list of terms)"
- other improvements
- Solr, the underlying search engine, can be configured with log level INFO. At this level every query is logged with metrics such as response time, number of documents examined, and number of successful hits. It is standard practice to collect this information for analysis. With it we can learn "what people are finding" and, more importantly, "what they are not finding" (which queries return zero results). Tooling to collect and analyze query metrics can be used to improve findability.
- It can be used to inform custodians of what terms were used to find their data.
- with analysis we can learn common terms that miss, determine what they should correspond to, and add these as aliases and keywords to improve findability.
- with analysis we can tune queries: boost the weights of certain components of the query to return better-quality results.
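The log-mining idea above can be sketched quickly. Below is a minimal, hypothetical Python example — the log-line shape (`params={...} hits=N`) follows the usual Solr request log, but the exact format depends on our Solr version and logging config, and the sample lines are invented:

```python
import re
from collections import Counter
from urllib.parse import unquote_plus

# Assumed Solr INFO request-log shape: "... path=/select params={q=...&rows=10} hits=N ..."
LINE_RE = re.compile(r"params=\{([^}]*)\}\s+hits=(\d+)")

def zero_hit_queries(lines):
    """Count search terms that returned zero documents ("what people are not finding")."""
    misses = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        params, hits = m.group(1), int(m.group(2))
        for pair in params.split("&"):
            if pair.startswith("q=") and hits == 0:
                misses[unquote_plus(pair[2:])] += 1
    return misses

sample = [
    'INFO ... path=/select params={q=water+quality&rows=10} hits=42 status=0 QTime=3',
    'INFO ... path=/select params={q=hydrologie&rows=10} hits=0 status=0 QTime=2',
    'INFO ... path=/select params={q=hydrologie&rows=10} hits=0 status=0 QTime=2',
]
print(zero_hit_queries(sample).most_common(1))  # [('hydrologie', 2)]
```

Terms that surface repeatedly here are natural candidates for the alias and keyword additions described above.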
ckanext-discovery may be a good candidate to add to our installation.
This extension provides multiple plugins that make it easier for your users to discover your data:
- search_suggestions: Provides real-time search suggestions in a drop-down box
- similar_datasets: Adds a list of similar datasets to the dataset detail view
- solr_query_config: Allows you to easily override the parameters that CKAN passes to Solr
- tag_cloud: Replaces the list of frequent tags on the home page with a tag cloud that shows the most popular tags scaled according to their popularity
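If adopted, ckanext-discovery would be enabled the same way as any other CKAN extension, by adding its plugins to the CKAN configuration file. A sketch (plugin names are those listed above; a real deployment keeps its existing plugin list and appends only the ones we decide to use):

```ini
# In the CKAN config file (e.g. production.ini), append the desired plugins:
ckan.plugins = search_suggestions similar_datasets solr_query_config tag_cloud
```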
Improvement of UI/UX on Metadata Record View and Resource View.
- additions of links to external services (WMS, etc.),
- specific resource information for specific types of resources,
- removal or de-emphasis of some presented information
Design and build integrated services that collect and present operational intelligence on data currency and activeness. Work with infrastructure / DA on a service to log and share data currency and update info via the catalogue.
This is potentially large; many systems write logs. I have an idea:
- collect logs, unstructured time series data,
- transform it into structured data (with Logstash or some other tool),
- load it into some NoSQL system - I'm thinking Elasticsearch
- create visualizations with Kibana (completing the ELK stack).
I'm not attached to what stack we use. I see a gap. We need to be better informed about our systems: how they perform, when they fall over or hiccup.
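The collect → transform → load steps can be sketched in Python; in practice Logstash's grok filters do this job, and the log pattern, field names, and sample line below are illustrative assumptions:

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical access-log shape; real logs need their own pattern
# (this is the kind of parsing a Logstash grok filter would express declaratively).
LOG_RE = re.compile(
    r'(?P<ip>\S+) - - \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+)'
)

def to_document(line):
    """Parse one raw log line into a structured dict suitable for indexing."""
    m = LOG_RE.match(line)
    if not m:
        return None
    doc = m.groupdict()
    doc["status"] = int(doc["status"])
    # Normalize the timestamp to UTC ISO 8601, the form Elasticsearch expects.
    doc["@timestamp"] = datetime.strptime(
        doc.pop("ts"), "%d/%b/%Y:%H:%M:%S %z"
    ).astimezone(timezone.utc).isoformat()
    return doc

line = '10.0.0.5 - - [01/Mar/2024:08:15:00 -0800] "GET /dataset/water-quality HTTP/1.1" 200'
print(json.dumps(to_document(line)))
```

The resulting dicts are ready to be batched into Elasticsearch's bulk API, after which Kibana can visualize the indexed documents.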
Integrate with
- BCeID
- 3rd party authentication services such as Facebook, Google, AWS IAM, Twitter, GitHub, Instagram.
The motivation for this is to allow other users to access and contribute data. Potentially municipalities could add data to the catalogue if their users could be authenticated.
- SDE,
- ArcMap,
- iMapBC (a layer visualization tool developed by the Ministry),
- Hectares BC (a Web-accessible geographic analysis tool developed by the Ministry),
- DataBC Mashup Framework (a dataset combination and Web mapping tool developed by the Ministry),
- the BC Geographic Warehouse (an Oracle repository of BC Government spatial datasets),
- and provisioning of CSW.
Authoring (Metadata Management) enhancements:
- addition of validation,
- auto-completion, and
- UI/UX improvements
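A hedged sketch of what the validation piece could look like. The field names and rules below are illustrative only, and in CKAN this logic would be wired in through the schema/validators machinery rather than a standalone function:

```python
import re

# Illustrative email rule; real validation rules would come from the metadata standard.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_record(record):
    """Return a dict of field -> error message for a draft metadata record."""
    errors = {}
    if not record.get("title", "").strip():
        errors["title"] = "Title is required"
    if record.get("contact_email") and not EMAIL_RE.match(record["contact_email"]):
        errors["contact_email"] = "Not a valid email address"
    return errors

print(validate_record({"title": "", "contact_email": "not-an-email"}))
```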
We need to perform an assessment of the user experience. I hypothesize that we have 4 groups of users:
- new editors
- experienced editors
- experienced users of the catalogue - people looking for, finding, and using data
- inexperienced users
- somehow do some user testing within Catalogue 101
- Hackathons - be prepared and gather info when our data is being used.
- solicit it from users: reach out; gather a list of recently logged-in users, contact them, call them.
The integration between MPCM, BCGW, DWDS, AGOL and applications can benefit from this. More detail needed here.
more detail needed.