International online community for DNA barcoding professionals

Previewing a software solution for barcode pipelines: the Geneious Biocode Plugin (Webinar 24 June 2010)

Have further questions from Chris Meyer's webinar preview of the Geneious Biocode Plugin? Respond here and let the discussion continue!


Replies to This Discussion

Question from the webinar:

Has Biomatters created a pricing system for barcoding groups? Is it an upfront purchase and/or annual licenses?
01:35:10 of Webinar

Biomatters has a variety of pricing options for individuals, students, and academic institutions, including 12-month and perpetual licenses. Check the purchasing options link and others at their website:
They also offer quotes for various purposes such as teaching labs. Currently, there are no special pricing options for barcoding in particular.
Question from the webinar:

What labs are testing the system now?
01:28:50 of Webinar

To date, the plugin has been tested mostly by Biocode partner institutions, including the Smithsonian's LAB, a number of labs at UC Berkeley (MVZ, etc.), and FLMNH. After the Mexico barcoding meeting a few folks expressed interest and are beginning to try the system as well, including Jonathan Deeds at USDA and Andrew Mitchell's lab in Australia.

Part of the reason for getting this Webinar out to everyone is to build a broader user group to help create a better product.
Question from the webinar:

How do you handle sharing access among people and labs who are working on the same collections?
01:27:35 of Webinar

Geneious offers multiple ways to share data across collaborators, and more information can be found in their help functions and tutorials. While there is a "collaborators" button on the Services panel, the best way to share the same data and keep annotations current is to write to a shared server (use the server button on the same panel). You'll need someone at your lab/institution to set this up with the appropriate permissions, passwords, etc.
Question from the webinar:

I noticed that you do the editing (and tree building) before BLAST searches. We had been thinking of doing BLAST searches first to identify problematic results and save work later, but maybe your experience suggests otherwise?
01:20:50 of Webinar

To me this is a bit of a toss-up, as a BLAST of any sort is only going to be as good as your reference dataset. Of course a tree will be helpful if you have conspecifics or closely related species to your target; it's all context dependent. Before doing too much annotation and cleaning up of the assemblies, I usually run a quick alignment and tree just to get a bird's-eye view of where the data are heading.
Question from the webinar:

Can you export the files and work on your manually modified chromatogram files etc. in other programs? Are the manipulated chromatograms saved as separate files, and do they receive meaningful names? I'm worried about this since we're dealing with data that deserve long-term archiving (the manipulations of the raw data constitute hypotheses).
01:30:50 of Webinar

All edits are tracked, and they can be deleted and the raw data recovered. This is a Geneious feature we really liked: trimmed portions are not deleted, subjectively called bases are marked, and all of this is reported. We have talked with GenBank about explicitly capturing subjective calls, meaning calls beyond our standard workflows (our trimming, binning, and verification settings, which are also captured). As for names: once traces are linked with the specimen data, either through a spreadsheet or a service, they can be renamed with any of the fields (but this is permanent, so it would be good to establish a naming system first). We have also talked about creating GUIDs for the raw outputs (traces).
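Since renaming in Geneious is permanent, one way to "establish a system" is to derive trace names from the specimen spreadsheet before import, keeping the original filename stem so the raw output stays traceable. Here is a minimal sketch of that idea; the column names (`extract_id`, `genus`, `species`, `locus`) and the filename convention (`EX001_F.ab1`) are hypothetical, not part of the plugin:

```python
import csv
import io

# Hypothetical specimen spreadsheet: maps an extract ID embedded in the
# raw trace filename to the fields we want in the new trace name.
SPREADSHEET = """extract_id,genus,species,locus
EX001,Chaetodon,lunula,COI
EX002,Acanthurus,nigrofuscus,COI
"""

def rename_trace(filename, rows):
    """Build a meaningful trace name while preserving the original
    filename stem, so the raw trace remains identifiable."""
    stem = filename.rsplit(".", 1)[0]      # "EX001_F" from "EX001_F.ab1"
    extract = stem.split("_")[0]           # "EX001"
    row = rows[extract]
    return f"{row['genus']}_{row['species']}_{row['locus']}_{stem}.ab1"

rows = {r["extract_id"]: r for r in csv.DictReader(io.StringIO(SPREADSHEET))}
print(rename_trace("EX001_F.ab1", rows))
# → Chaetodon_lunula_COI_EX001_F.ab1
```

Keeping the stem inside the new name acts as a poor man's GUID until proper identifiers for raw outputs exist.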
Question from the webinar:

Hello, I was very interested in how referencing to specimen data provided via TAPIR (DiGIR, ABCD) is or can be accomplished. You didn't really touch on that.
01:23:30 of Webinar

I put a slide with more detailed information about this at the end of my presentation, so check out that time frame. We also intend to create a similar tutorial video demonstrating how to do this and will let everyone know when it gets posted, likely right here.

For TAPIR details, you can get started here:

For GBIF Integrated Publishing Toolkit (IPT) details, you can get started here:

Once you go through the installation process, you can add this extension:

TapirLink is a generic TAPIR provider software based on the PHP DiGIR provider (note: TapirLink is compatible only with TAPIR, not DiGIR). It uses the PHP ADOdb library to access different types of relational databases. A single TapirLink instance can provide access to multiple TAPIR resources, each with its own service address. TapirLink allows each resource to map one or more data abstraction layers. Search responses can be returned in many different formats, although only tabular (denormalized) data can be served.
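Because each TapirLink resource gets its own service address, talking to it is just a matter of building TAPIR key-value-pair request URLs against that address. The sketch below constructs `ping` and `capabilities` requests (both standard TAPIR operations); the base URL is hypothetical, and no network call is made:

```python
from urllib.parse import urlencode

# Hypothetical TapirLink resource endpoint; a single TapirLink
# installation serves one such address per resource.
BASE = "http://example.org/tapirlink/tapir.php/specimens"

def tapir_kvp_url(base, op, **params):
    """Build a TAPIR key-value-pair request URL for a given operation."""
    query = urlencode({"op": op, **params})
    return f"{base}?{query}"

# "ping" checks that the service is alive; "capabilities" reports which
# operations and schemas the resource supports -- useful before wiring
# it into a tool like the Geneious Biocode plugin.
ping = tapir_kvp_url(BASE, "ping")
caps = tapir_kvp_url(BASE, "capabilities")

print(ping)
print(caps)
```

Fetching one of these URLs (e.g. with `curl`) returns a TAPIR XML response from the provider.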

The GBIF IPT is an open source, Java-based web application that connects and serves three types of biodiversity data: taxon primary occurrence data, taxon checklists, and general resource metadata. The data registered in a GBIF IPT instance are connected to the GBIF distributed network and made available for public consultation and use.




© 2014   Created by Mike Trizna.   Powered by
