The following workshops have been accepted to the All Hands Conference 2007 and have now issued calls.
- Workshop 1: OMII UK
Organiser: June Finch (email@example.com)
Call for papers
OMII-UK was formed in January 2006 as a partnership between the existing OMII activity at the University of Southampton, software activities at the University of Manchester within the myGrid project, and the OGSA-DAI project at the National e-Science Centre and EPCC. OMII-UK aims to provide software and support to enable a sustained future for the UK e-Science community and its international collaborators - www.omii.ac.uk.
The OMII-UK collaboration delivers software (e-infrastructure services and tools) that enables researchers to start using similarly enabled remote data and compute resources to support their work. These components can be downloaded individually from the development websites or our own repository, or as a packaged and integrated release. It is our vision that these components will provide working solutions to the community rather than toolkits that need further integration effort.
Additionally, OMII-UK provides support and training around the offered software, together with a route for the e-Science community to publicise their projects and software outcomes, and funding, via the Commissioned Software Programme, to address solution "gaps" identified by the e-Science community.
In this workshop, we present a selection of recently funded Commissioned Software Programme projects, together with an opportunity for all attendees to see how OMII-UK can benefit them. We also launch the OMII-UK Open Forum and invite attendees to take the opportunity to help shape OMII-UK's activities.
- Workshop 2: Building Usable Systems for the Global Environment
Organiser: Dr. Nicholas A Walton (firstname.lastname@example.org)
Call for papers
A number of major UK projects are now deploying systems for operational and scientific use. These projects generally aim to provide a complete solution for their specific domains, with capabilities to meet the scientific demands of their user communities.
Further, these UK projects exist in a global environment, where access to and interoperability with similar domain projects and implementations in other countries or regions is vital to ensure full scientific functionality for their UK end-user science base.
At AHM2006 a BoF was held to explore the key issues confronting these major projects - see http://www.allhands.org.uk/2006/programme/BoFs/usystems.html. It was apparent that the various projects were grappling with similar problems in a number of areas, and in some cases developing similar solutions. This workshop provides examples of sharing techniques and solutions across domains, helping to ensure that these systems are 'usable' for their communities.
Specific topics covered include:
- Definition of domain-specific interoperability standards: how communities have organised and agreed high-level interoperability standards against which the projects develop functional implementations.
- Approaches taken by domain-specific projects in determining which services they should develop and provide internally, which should be developed and/or provided by the base e-infrastructure projects, and which extra challenges are presented when interfacing with the wider world.
- Use of common cross-domain standards and middleware components.
- System design to allow transparent end-user access to partner project data and applications, both UK-based and across national boundaries.
- Design and operation of usable and sustainable systems that allow secure and authorised interoperation between UK e-Science systems and global partner projects.
- Examples of scientific usage of UK projects where access to partner project data and applications has been facilitated by the use of interoperability standards.
- Consideration of deployment of the usable system, addressing the interface to both UK and non-UK production grid resources.
The workshop closes with an open discussion session.
- Workshop 3: Visualisation Tools
Organisers: David Duke (email@example.com) and Ian Grimstead (I.J.Grimstead@cs.cardiff.ac.uk)
This workshop is intended to bring together end-users (and potential users) of visualization and the community involved in developing new visualization systems and tools. It will provide an opportunity for end-users to learn how visualization systems can be used and how they are evolving, and for researchers to get feedback on users' views of these new directions.
In the first part, "visualization today", Lakshmi Sastry (Rutherford) and Helen Wright (Hull) will present talks on how visualization and computational steering have been and can be applied in a number of challenging application domains, including large-scale integrated biology. The second part, "visualization tomorrow", consists of four shorter presentations setting out new directions in both fundamental research and the technologies for delivering visualization, in particular over grids. Following the talks there will be a brief discussion session.
- Workshop 4: Issues in Ontology Development and Use
Organiser: Rob Procter (firstname.lastname@example.org)
Call for papers
Ontologies are reaching a critical phase in their development as they begin to be taken up and used seriously within different fields of research. Preliminary investigations of their development and use suggest that the e-Science community is now stumbling across a whole range of issues which first surfaced within software engineering twenty (or more) years ago.
The aim of this workshop is to bring together ontology users, domain experts, ontology developers, tool developers, and members of the ontology standards and library and information sciences communities to share experiences of this key technology for e-Science and to explore how the problems now beginning to emerge might be addressed. Contributions were sought on topics such as:
- methods and tools for knowledge elicitation and representation, including user participatory approaches
- managing change in requirements
- designing for scalability and re-use
- standards and guidelines for maintainability
- ontology languages, tools and infrastructure
- socio-political dynamics arising from the interplay between the increasing formalisation of shared knowledge and the practices of scientific research.
- Workshop 5: VREs: Where are we now?
Organisers: Marina Jirotka, Annamaria Carusi, Anne Trefethen
Call for papers
Virtual Research Environments (VREs) have been at the cutting edge of embedding the e-science vision across the academy. This has meant allowing disciplines and communities to appropriate the vision and to define it for themselves. This has resulted in a medley of diverse applications ranging from Arts and Humanities to experimental sciences. As the second round of VRE projects get off the ground, this is a good time to consider where VREs are going and the shapes they are taking.
The workshop will start off with an overview of the JISC VRE and VRE2 programmes and a brief description of some of the projects funded under them, including Political Discourse, Ancient Documents, Dance within a Collaborative Stereoscopic Access Grid Environment, Archaeology (VERA), natural sciences (MyExperiment) and general collaborative events (CREW). We will then go on to a roundtable discussion in which project members and others involved with the JISC programme will be invited to address central questions regarding the challenges of VREs:
- what is the nature of the virtual collaboration in research?
- how are VREs changing research practices and institutions?
- how do you see the understanding of a VRE evolving over time?
- what is the relation/cross-over between VREs and virtual organisations and virtual communities?
The roundtable discussion will be extended to audience members.
The event will include:
- Helen Bailey, Martin Turner, e-Dancing (VRE1 & AHRC/EPSRC/JISC e-Science Initiative), University of Bedfordshire and University of Manchester
- Mark Baker, Archaeology / VERA (VRE2), University of Reading
- Mike Daw, Collaborative Events / CREW (VRE2), University of Manchester
- Dave de Roure, Sciences / MyExperiment (VRE2), University of Southampton
- Simon Hodson, History of Political Discourse (VRE1), University of Hull
- Ruth Kirkham and John Pybus, Ancient Documents (VRE2), Oxford University
- Frédérique van Till, Programme manager VRE, JISC
- Matthew Dovey, Programme director e-Research, JISC
- Alexander Voss, NCeSS, Edinburgh
- Workshop 6: We have to Talk about Metadata
Organiser: Neil Chue Hong (N.ChueHong@epcc.ed.ac.uk)
Call for papers
The Grid provides us with the ability to create a vastly different model of data integration and management, allowing support for dynamic, late-binding access to distributed, heterogeneous data resources; diverse replication capabilities; and integration of legacy storage systems. However, the opportunities to exploit these new methods also raise many issues and open questions. In particular, different attitudes and misconceptions towards the management of metadata abound, and it can be difficult to identify clear strategies and tools for the handling of metadata at many different levels, from service discovery to semantic integration.
One thing that is clear is that many people face these issues, but there is little in the way of dialogue or best practice between communities. We are not talking enough about metadata.
This workshop will bring together e-Science developers and end-users to present and disseminate the current status of approaches to metadata handling, examine commonalities at the technology, interface and schema levels, and ensure that best practice is recorded and disseminated to the e-Science community.
Following this, there will be an open discussion around the following topics, with the aim of drawing out the important issues currently facing the community as a whole:
- Common requirements on metadata to support data integration
- Authentication and Authorisation issues pertaining to metadata
- Experiences of transferring approaches for metadata handling between communities
- The current challenges in managing metadata
- Tracking of provenance in the context of metadata
- Approaches of binding metadata to data
- Novel approaches to managing metadata, e.g. using RESTful interfaces
- Architectural issues preventing seamless management of data and metadata
- Use of Web Services / Semantic Web standards to assist distributed metadata management
- Currently available solutions and success stories
- Review of missing tools and technologies
The results of the discussion will be published as a short technical report after the workshop.
- Workshop 7: Text and Grid: Research Questions for the Humanities, Sciences and Industry
Organisers: Dr. Tobias Blanke (email@example.com) and Dr. Stuart Dunn (firstname.lastname@example.org)
Call for papers
This workshop represents a crucial opportunity to explore in depth the implications of e-Science for textual analysis in academia and industry. Four papers by researchers at the very cutting edge of the field will explore these issues from four distinct, yet complementary perspectives. The workshop will consist of the presentations, an open discussion, and a summing up by AHeSSC staff.
The presenters are:
- Loretta Auvil, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign
- Hamish Cunningham, GATE project, University of Sheffield
- H. Paijmans and S. Wubben, Tilburg University
- Gregory Crane, Tufts University and Dolores Iorrizo, Imperial College
Although a range of common themes is expected to emerge from the papers and the discussion, key strands that the workshop will address include the importance of service-oriented computing; the ontological relationship between text, semantics and knowledge; infrastructure needs; and the social, political and strategic implications of the current floruit of Web 2.0.
Textual resources play a pivotal role not only in research, but also in business. In 2003 alone, 300 Terabytes of textual data were produced, without counting more dynamic texts such as blogs, wikis and websites. Google, Microsoft and Yahoo are all working on creating gigantic digital libraries for textual resources that would be both more accessible and more comprehensible than any other digital library in history. Project partners in "Cultural Heritage Language Technologies", such as the Perseus Project, which will be represented at the workshop by Crane, promote the use of modern computational and storage techniques to integrate tools and data for research on and with texts in different formats. In the UK, the AHRC e-Science Scoping Study expert seminars in textual studies, linguistics and history have discussed the potential of Virtual Organisation and Grid technologies for humanist textual analysis. The crucial importance of service architectures and infrastructure will be highlighted, and a strong disciplinary case study based on archaeological texts presented.
The workshop is likely to be of interest to any researcher or developer, in any field, with an interest in the use, analysis, processing, storage or retrieval of text.
More information, including titles of the papers, will be available shortly at http://www.ahessc.ac.uk/allhands.
- Workshop 8: Interoperability and adaptability of text mining tools
Organiser: William Black (email@example.com)
Call for papers
Text mining goes beyond information retrieval by analyzing texts above the ‘bag of words' representation from which search engines normally index documents. Applying text mining in the e-science community involves choosing from an increasing variety of ways in which to analyze documents, annotate them with metadata, index them and mine the indexes for associations between annotated text elements and higher-level constructs. To benefit from community accomplishments in text mining technology, two key features of analysis tools are interoperability and adaptability.
Interoperability allows tools from one source to be composed with tools from other sources. This permits both substitution of tools, to find the fittest alternative for a given level of analysis within a pipeline, and higher-level analyses that build on established analysis tools without reinventing the wheel. Several software frameworks (again, sometimes competing) are in place to enable interoperability. Their key components are a workflow management system, in which software modules' capabilities are described and modules are composed together relying on a common invocation API, and a common document representation framework realized as a repository of annotations. The latter is a basic requirement, but because the metalanguage of annotations is usually pitched at a level of generality that makes almost no ontological commitments, it remains a challenge to integrate tools written within different analysis paradigms to best advantage. The utilization of third-party tools integrated into frameworks often remains at a low level of analysis, and fails to exploit the full potential of richer analysis tools.
Adaptability in a tool allows it to be readily deployed in ways different from its original genesis, most typically in a different domain of application, or perhaps in a different language. Individual components of text mining analysis pipelines are often constructed by adaptive techniques such as supervised and unsupervised learning, while others are crafted intellectually using linguistic and domain knowledge. In both cases, obtaining the empirical evidence or data for development is a significant challenge, and standard approaches to resource construction involve considerable effort in text annotation for no purpose other than resource development. In other words, it requires the scientist to accept considerable deferred gratification for efforts that do not in themselves lead to a 'result'. The adaptability challenge in text mining is to develop modes of analysis that generate resources for further analysis as a by-product of their deployment.
- The paper by Rupp et al. looks at the interoperability question from the perspective of the SciBorg project, in which analysis tools that address the same level of linguistic analysis are deployed in co-operation on a corpus of Chemistry papers.
- The presentation by Scott et al. discusses issues in the design of a multi-level text analysis system when the decision has been taken to adopt the Apache (originally IBM) Unstructured Information Management Architecture, using the BOOTStrep project as a case study.
- Lewin's paper is concerned with a single text analysis component, one that recovers the document structure of scientific papers in a given domain, using journal-specific rules learned by the Transformation Based Learning algorithm.