|In 1982 I completed my M.S.L.I.S. thesis at the University of Illinois at Urbana-Champaign on “Information Needs and Information-Gathering Behavior of Research Engineers.” After receiving my degree, I worked for 14 years with engineers and scientists. Since then I have kept up with the profession through electronic lists, blogs, and personal networking. What lessons have I learned? First, when faced with a problem, engineers go first to their own resources (notes, private library, etc.), then to a colleague, and to the librarian only as a last resort. A good, proactive librarian can decrease the time “wasted” before they come to the library. Practice “reference by walking around.” Second, how do you teach them to use online or print resources? Make use of “the teachable moment”: instruction at the point of need. Third, feed them and they will come. Especially with younger engineers, providing food will almost ensure attendance at an open house and will greatly increase attendance at instruction sessions. And fourth, to encourage them to use the library, offer more than standard library services: provide maps, computer magazines, comfortable places to read or work, and chocolate. Even if you can't change the way engineers and scientists think, you can change the way they think about information, libraries, and librarians.|
|It is becoming increasingly evident that people engaged in applied engineering have significantly different needs from researchers and information consumers in other disciplines, such as health. Public search engines like Google are still popular destinations for gathering general information or surveying the landscape, but more often the engineer's demands center on fast and reliable access to highly specialized and specific data. This information typically does not reside in journals; it is more likely found in reference texts, handbooks, and databases. Engineers spend upwards of 20% of their time in Excel! Giving engineers tools for simulating data models, manipulating calculations, and comparing process and materials specifications is key to satisfying their demands for pinpointing relevant, reliable information. This paper provides an in-depth look at the findings of user interviews and of two recent studies of hundreds of ASME and AIChE member engineers conducted on behalf of Knovel. The studies focus on how respondents currently work with information and what they are looking for in process- and productivity-enhancing information tools.|
|Chemical engineering students need many of the same resources that chemistry students do but, in addition, need sources for bulk chemical prices, process flow diagrams, vapor-liquid equilibria, thermodynamic data, loss prevention, and business information about their chemicals. Their favorite sources are Perry's Chemical Engineers' Handbook and the Kirk-Othmer Encyclopedia of Chemical Technology. Bulk chemical prices used to be found in Chemical Market Reporter, but it has changed its title to ICIS Chemical Business Americas and has changed its focus. Process flow diagrams can be found in Kirk-Othmer, but also in Kent and Riegel's Handbook of Industrial Chemistry and Biotechnology, Ullmann's Encyclopedia of Industrial Chemistry, McKetta's Encyclopedia of Chemical Processing and Design, patents, and some journal articles. Vapor-liquid equilibria can be found in Gmehling's Vapor-Liquid Equilibrium Data Collection, Knovel's Critical Tables, and Beilstein CrossFire; the latter two sources are also good for thermodynamic data. Lees' Loss Prevention in the Process Industries, available in book format or online in Knovel, is a good source for loss prevention. Business information can sometimes be found in the same databases that business students use, such as Lexis-Nexis, Business Source Complete, ABI/Inform, and Business and Industry, although two online databases may be more targeted: Chemical Business NewsBase and Chemical Industry Notes. However, these last two databases are not easily available to most students.|
|The IUPAC InChI is an open-source, public-domain, international standard for representing a defined chemical structure. This presentation will describe the history, evolution, and use of the IUPAC InChI/InChIKey project from its beginnings in 1999 to its current state of adoption and acceptance by the worldwide chemical community. The remaining portion of this symposium will be devoted to "case studies" from commercial, non-profit, and government organizations that are using InChI/InChIKey, with a panel session at the end for questions and discussion.|
|The task of finding chemical information online can be daunting since even the most rudimentary query on Google can provide tens to hundreds of thousands of links to peruse. While there has been an increase in the number of online chemical structure databases there has not been a central online resource allowing integrated chemical structure-searching of chemistry databases, chemistry articles, patents and web pages, such as blogs and wikis, until now. ChemSpider provides a significant knowledge base and resource for chemists working in different domains. From the perspective of the InChI identifiers this project can be considered to be a success story since ChemSpider has used both for the development of the database and the provision of fast searching routines. ChemSpider has provided web services for both InChI generation and searching, leading to a proliferation of InChI in the web-based domain of chemistry. This talk will provide an update of ChemSpider's functionality.|
|As the number of compound structures of potential interest continues to grow, so does the problem of correlating those compounds. Both internal and external compounds of interest must be indexed such that corresponding and closely related compounds can be quickly found and reported to interested client software applications. To address this growing problem at Pfizer, we have used the InChI encoding schema for chemical structures. Uniquely designed to be segmented at various levels of specificity, the InChI makes the problem of finding, for example, stereochemically related structures simpler and faster than other structure representations. We have processed all of the compounds in our files, from both internal and external origins, through a unique tautomer canonicalization followed by generation of the InChI string for each separate non-bonded fragment of the input molecular structure. Each unique InChI string is registered to an Oracle database, with composition records, as required, for each registered compound component pointing to these unique InChI records. Using a file of known salt and solvent fragments, components of the molecule are assigned types of parent, salt, or solvent at the time of registration. A web-based service was created for client applications that can return information about the compounds related to any given compound ID, structure, SMILES, or InChI string. Related compounds found in the database are categorized by match types such as Identical Parent, Different Stereo (mirror image), or Different Isotopic Labeling. Using the results of this service, client applications can provide their users with detailed information about compounds closely related to any compound that is otherwise identified by or of interest to the user, assisting them in fully exploring the known internal and external data around a compound or compound series of interest.|
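The layered structure of the InChI string is what makes match types such as "Identical Parent, Different Stereo" cheap to compute: drop the stereo layers (/b, /t, /m, /s) and compare what remains, and enantiomers collapse onto one parent. A minimal sketch in plain Python (the registration pipeline, tautomer canonicalization, and Oracle schema described above are not shown):

```python
def strip_stereo_layers(inchi: str) -> str:
    """Drop the stereo layers (/b, /t, /m, /s) from a standard InChI string.

    Layer prefixes are single lowercase letters; the version ("1S") and
    formula layers never start with b/t/m/s, so a simple filter suffices.
    """
    layers = inchi.split("/")
    return "/".join(l for l in layers if not l or l[0] not in "btms")


def same_ignoring_stereo(a: str, b: str) -> bool:
    """True if two InChIs describe the same compound up to stereochemistry."""
    return strip_stereo_layers(a) == strip_stereo_layers(b)


# Enantiomer pair based on alanine: only the /m layer differs.
ala_1 = "InChI=1S/C3H7NO2/c1-2(4)3(5)6/h2H,4H2,1H3,(H,5,6)/t2-/m0/s1"
ala_2 = "InChI=1S/C3H7NO2/c1-2(4)3(5)6/h2H,4H2,1H3,(H,5,6)/t2-/m1/s1"
```

The same idea extends to the other match types: comparing only the formula and connectivity layers groups isotopically labeled variants, and comparing per-fragment InChIs groups salt forms under a common parent.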
|The award-winning Project Prospect was launched in early 2007 and would not have been possible without having InChIs to represent chemical compounds, both as a compact representation within XML and as a transport medium over the web. We describe how the existence of InChI provided the impetus to set up Project Prospect in the form it took, how we have built it into our workflows, the needs that InChI doesn't satisfy and how we are dealing with those, as well as giving an insight into our staff development programme. We hope sharing our experiences will speed the uptake of InChI among participants.|
|A science publisher's output comprises hundreds of brands and products sourced from thousands of different authors and many different software systems. A critical part of the publishing process is handling these different inputs efficiently and producing a consistent product. A second challenge, unique to a chemistry publisher, is that many of our publications contain novel compounds that have not yet been registered in any compound registry and therefore do not have a unique identifier associated with them. Wiley was one of the first publishers to employ InChI and a precursor of the InChIKey as a publishing solution. Our publishing requirements included a compound identifier, a means of quickly identifying replicate records for the same compound, and a means of quickly matching look-up requests. We describe our approach and our experience deploying InChI in a real-world publishing environment.|
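The look-up and replicate-detection requirements described above are typically met by hashing each InChI to a fixed-length key and indexing records by that key. The sketch below is illustrative only: it uses a truncated hex SHA-256 digest as a stand-in for the real InChIKey algorithm (which encodes a truncated SHA-256 digest as base-26 letters), and the `Registry` class is a hypothetical construct, not Wiley's system:

```python
import hashlib


def make_key(inchi: str) -> str:
    """Fixed-length lookup key for an InChI. Illustrative stand-in only:
    the real InChIKey encodes a truncated SHA-256 digest in base-26
    letters; here we simply keep 27 hex characters."""
    return hashlib.sha256(inchi.encode("utf-8")).hexdigest()[:27].upper()


class Registry:
    """Hypothetical compound registry: replicate records for the same
    compound collapse onto a single key, so duplicates are found in O(1)."""

    def __init__(self):
        self._by_key = {}

    def register(self, inchi: str, record: dict) -> str:
        key = make_key(inchi)
        self._by_key.setdefault(key, []).append(record)
        return key

    def lookup(self, inchi: str) -> list:
        return self._by_key.get(make_key(inchi), [])
```

Because the key is computed from the structure itself, two articles depicting the same novel compound produce the same key even though no external registry has assigned it an identifier yet.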
|Nanotechnology's growing applications are fueled by the synthesis and engineering of myriad nanostructures, yet there is no systematic naming and/or classification scheme for such materials. This lack of a coherent nomenclature is confusing the interpretation of data sets and threatens to hamper the pace of progress and risk assessment. A systematic nomenclature that encodes nanostructures' overall composition, size, shape, core and ligand chemistry, and solubility is presented. A typographic string of minimalist field codes facilitates digital archiving and searches for desired properties. This nomenclature system could also be used for nanomaterial hazard labeling.|
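A typographic field-code string of the kind described is straightforward to archive and search digitally. The code below is a deliberately hypothetical illustration: the field names (`core`, `shape`, `d`, `lig`, `sol`) and the `/`-separated `key=value` syntax are invented for this sketch and are not the published scheme:

```python
# Hypothetical nanostructure code: "/"-separated key=value fields covering
# composition, shape, size, ligand chemistry, and solubility. Field names
# and syntax are invented for illustration, not the proposed nomenclature.
EXAMPLE = "core=Au/shape=sphere/d=5nm/lig=citrate/sol=aq"


def parse_nanocode(code: str) -> dict:
    """Split a field-code string into a searchable field -> value mapping."""
    return dict(field.split("=", 1) for field in code.split("/"))


def matches(code: str, **criteria) -> bool:
    """True if every requested field has the requested value,
    e.g. matches(code, shape="sphere")."""
    fields = parse_nanocode(code)
    return all(fields.get(k) == v for k, v in criteria.items())
```

The point of the sketch is the one made in the abstract: once composition, size, shape, and surface chemistry live in fixed fields of a string, property searches and hazard-label generation become trivial string operations.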
|A report published in C&EN in 2005 on nanotech terminology said this: “It's basically been a free-for-all in the world of nanotech terminology. Quantum dots, nanoshells, nanopeapods—nanoscientists have been inspired by everything from Polish dumplings to Inuit landmarks when naming new nanomaterials.” Three years on, the state of flux hasn't gone away, although several recent efforts have made the picture clearer and helped crystallize a rudimentary framework for nanotech nomenclature. Environmental science, a field downstream of both chemistry and nanotechnology, has as usual lagged in developing its own nomenclature. Yet the pace of progress is such that by the time this presentation finally lights up on the screen, the author's preliminary thoughts on the topic may already be obsolete. Despite that, we will attempt to summarize the most recent approaches to nomenclature in the environmental sciences.|
|Literature related to the field of nanotechnology is growing rapidly – 1.8 percent of the records covered by Chemical Abstracts Service in 2000 contained the term “nano”; in 2005 that percentage had grown to 4.9 percent, and in 2007 it was over 8 percent. As expected, CAS scientists are seeing a quickly evolving nomenclature in this relatively new field of science. This presentation will discuss some of the examples and problems encountered in processing nano information, and solutions that CAS is adopting for indexing and substance representation. Specific examples will be illustrated.|
|The words used to describe and claim an invention determine the scope of its patent protection. The words used to describe an invention do not necessarily change with size; they can be the same at the macroscale and at the nanoscale. Common words may be “too big” for nanotechnology, and simply using the prefix “nano” may not be sufficient to accurately describe an invention. This presentation considers how words in patent claims are interpreted and how that interpretation impacts nanotech inventions. As nanotechnology continues to develop, so does nanotechnology patent practice. This presentation considers the application of patent law and practice to nanotechnology and discusses how to use patent strategy to achieve effective, robust nanotech patents.|
Nano. A popular culture term, a marketing term, and a magical key to unlocking research funding. But what makes something nanotechnology? The answer can depend on your background as much as your intentions. A few things are certain. The field is an interdisciplinary meeting of scientists, engineers, and companies. Diverse backgrounds create a wide range of interpretations, expectations, and conventions. The concepts are similar but the descriptive language can be quite different. Attempting to define nanotechnology provides a great jumping off point to begin to tame the challenge of defining standard terminology for this field.
I will explore the definition of nanotechnology from the perspective of a MicroElectroMechanical Systems (MEMS) engineer working in a group of self-proclaimed nanotechnologists. Starting with our “top-down” techniques for device fabrication, including nano-imprint lithography, atomic layer deposition, reactive ion etching, and plasma-enhanced chemical vapor deposition, I will introduce our language. This contrasts with the “bottom-up” synthesis techniques from chemistry or biology, in which molecules or systems are assembled from the molecular level upward. I will look for common ground at a high level, drill down on some topics to illustrate the current state, and offer suggestions on how to get the diverse communities to use the same terminology. I will use examples from the ACS Nanotations wiki to highlight how an online community can help develop standard terminology.
|PubChem is a free, online public information resource located at the National Center for Biotechnology Information (NCBI). The system provides information on the biological properties and activities of chemical substances, linking together results from different sources on the basis of chemical structure and/or chemical structure similarity. PubChem utilizes InChI as a means of structure input and output. This presentation will detail ways in which InChI may be used in conjunction with the PubChem resource, including data integration and data mining aspects.|
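As one concrete integration path, PubChem's PUG REST interface accepts InChIKeys directly as structure input. A small sketch of building such a request URL and unpacking the JSON reply (no network call is made here; the response layout shown in the comment reflects PUG REST's usual `IdentifierList` envelope):

```python
BASE = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"


def cids_by_inchikey_url(inchikey: str) -> str:
    """Build the PUG REST URL that resolves an InChIKey to PubChem CIDs."""
    return f"{BASE}/compound/inchikey/{inchikey}/cids/JSON"


def extract_cids(response_json: dict) -> list:
    """Unpack the CID list from a PUG REST identifier response,
    e.g. {"IdentifierList": {"CID": [2244]}}."""
    return response_json.get("IdentifierList", {}).get("CID", [])
```

Fetching the URL with any HTTP client and passing the parsed JSON to `extract_cids` yields the PubChem compound identifiers, from which bioactivity and similarity links can then be followed.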
|The role of calculated compound identifiers is increasingly important as large collections of chemical structures are made available in online systems. The ability to correlate molecules and reactions across multiple sources is critically important to high-performance delivery of related records from different sources. The progression from topologically derived text strings (WLN, SMILES) to connection tables (molfiles, SDfiles, RDfiles) and derived values (SEMA, InChI, NEMA, and others) continues to bring us closer to the ultimate goal of a unique, globally standard, computed compound identifier. The role of InChIKeys and related values in delivering high-performance access to large datasets of chemistry-related information via web services will be examined.|
|Online searchable databases of structures, 3-D imagery, and searchable formulae take chemistry information light years beyond what the printed page made possible. Chemists have also been amongst the most active in embracing blogging and other Web 2.0 initiatives such as open lab notebooks. In this environment, the challenge is on for publishers to deliver journals that go well beyond traditional publishing models. Nature Publishing Group, in launching a new chemistry journal, has been able to look at the current status of chemistry publishing and develop a new generation of tools and approaches embracing these new opportunities. This talk will outline these new developments and pose questions for the future.|
|We present a comparison of the IUPAC InChI/InChIKey Identifiers with our CACTVS hashcode-based NCI/CADD Identifiers. Both types of identifiers are calculated in the context of, and are available in, our Chemical Structure Lookup Service (CSLS) available at http://cactus.nci.nih.gov/lookup, which currently indexes approx. 57 million chemical structure records representing about 40 million unique chemical structures. Like the IUPAC identifiers, our NCI/CADD Identifiers have been specifically designed to enable a fine-tunable yet rapid compound identification even in very large datasets. They can be set to be sensitive to a variety of chemical features such as tautomerism, different resonance structures drawn for a charged species, and fragments such as counterions. We will discuss the differences in structure identification between the NCI/CADD and the IUPAC identifiers that we have observed in this very large structure set, and what these discrepancies can tell us about definition and design, scope, limitations and problems in either set of identifiers.|
|Predictive scoring functions based on statistical learning techniques generally require large amounts of quantitative training data. Unfortunately, this numerical knowledge is usually unavailable or prohibitively expensive to obtain. For practical application, however, experts often require only qualitatively precise results that define accurate ranking orders. Inspired by the inherent reaction-prediction capability of human chemists, we propose a novel machine learning technique in the context of state energy calculations. QM/MM and wet-lab experiments can supply some quantitative energy data but are impractical to run on a large scale. In contrast, chemists exhibit significant problem-solving ability without making exact numerical calculations; rather, their decisions are based solely on qualitative knowledge of trends and ranking orders in molecule stability and reaction rates. Our method utilizes the limited quantitative experimental data available together with this qualitative information to yield scoring functions accurate enough to reproduce the problem-solving capability of human experts.|
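One simple way to learn from purely qualitative ranking knowledge is pairwise training: each statement "A is more stable than B" becomes a constraint that the learned score of A must exceed that of B. The perceptron-style sketch below illustrates the idea only; it is not the authors' method:

```python
def score(w, x):
    """Linear score of feature vector x under weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))


def train_ranker(pairs, n_features, epochs=100, lr=0.1):
    """Pairwise perceptron: for each (hi, lo) pair, nudge the weights until
    score(hi) > score(lo). The pairs encode purely qualitative knowledge
    such as "hi is more stable than lo"; no numeric energies are needed."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for hi, lo in pairs:
            if score(w, hi) - score(w, lo) <= 0:  # ordering violated
                for i in range(n_features):
                    w[i] += lr * (hi[i] - lo[i])
    return w
```

Given a handful of expert-supplied orderings over feature vectors, the learned weights reproduce the ranking on unseen items whenever a consistent linear ordering exists, which is exactly the "qualitatively precise" regime the abstract targets.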
|Current scoring functions often fail to correctly prioritise compounds according to their known binding affinities. Previously, negative training data has been employed in scoring function optimisation: a genetic algorithm optimises a function to rank a known binding mode in preference to noisy decoy poses, which has the advantage of explicitly accounting for disfavoured interactions. We present a more targeted multiobjective approach. Using the Astex diverse dataset, we dock with an impaired version of GOLD to generate diverse decoys for each protein, and with a multiobjective evolutionary algorithm we demonstrate a scoring function optimisation protocol. Optimising every pair-wise combination of the 85 members of the Astex diverse dataset, we show that contentions exist in the optimal scoring function, suggesting that no global function exists for all targets. We extend this method to cross-docking to incorporate protein flexibility, optimise to particular targets and target classes, and demonstrate performance in virtual screening.|
|The long term goal of this project is to develop a computerized system with problem-solving capabilities in synthetic organic chemistry comparable to those of a human expert. At the core of such a system should be the ability to predict the course of chemical reactions to, for instance, validate synthesis plans. Our first approach, based on encoding expert knowledge as transformation rules, achieves predictive power competitive with chemistry graduate students, but requires significant knowledge engineering to expand its coverage to new reactivity. To overcome this limitation and achieve greater predictive power, our current approach is not based on specific rules, but instead upon general principles of physical organic chemistry. These principles allow the system to elucidate the mechanistic pathways and reaction coordinate energy diagrams of simulated reactions. These results directly mimic the qualitative problem-solving ability of human experts, but with the speed, precision, and combinatorial power of an automated system.|
|Perhaps the most commonly used molecular interaction potential is the GRID field, comprised of a discrete grid placed over a molecule for which potential interaction energies between the molecule and a probe group (e.g. water) are calculated at each vertex. However GRID fields can be very large so that it is infeasible to align molecules based on their GRID representations. We show that the Daubechies 4-tap wavelet transform can be exploited to represent finely sampled GRID maps in 1.1% to 1.5% of the storage of the original fields. The reduced representations can be used in ligand-based similarity searching without significant loss of accuracy compared with using the whole field. The efficacy of other wavelets and the fast Fourier transform are also examined. We also describe the impact of wavelet approximation upon the retrieval of actives from decoys, and a method for generating molecular alignments based on the reduced GRID fields.|
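The compression idea can be demonstrated with the Haar wavelet, the simplest member of the Daubechies family, standing in here for the 4-tap transform used in the work above: transform the sampled field, zero all but the largest-magnitude coefficients, and invert. A self-contained sketch for a 1-D signal of length 2^k:

```python
import math


def haar_forward(x):
    """Full Haar decomposition of a length-2^k signal: returns the single
    approximation coefficient followed by detail coefficients, coarse to fine."""
    x = list(x)
    out = []
    while len(x) > 1:
        avgs = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        dets = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        out = dets + out
        x = avgs
    return x + out


def haar_inverse(c):
    """Invert haar_forward exactly."""
    x = c[:1]
    k = 1
    while k < len(c):
        dets = c[k:2 * k]
        x = [v for a, d in zip(x, dets)
             for v in ((a + d) / math.sqrt(2), (a - d) / math.sqrt(2))]
        k *= 2
    return x


def compress(x, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    c = haar_forward(x)
    thresh = sorted(map(abs, c), reverse=True)[keep - 1]
    return [v if abs(v) >= thresh else 0.0 for v in c]
```

For smooth, finely sampled fields most coefficients are near zero, so keeping a percent or two of them, as in the GRID results reported above, changes the reconstructed field very little while shrinking storage dramatically.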
|The angiotensin II type 1 (AT1) receptor is a family A GPCR that mediates the renin-angiotensin system (RAS), a well-characterized pathway for blood pressure regulation. A class of non-peptide AT1 antagonists called “sartans” has been used successfully to treat hypertension. The ability to design more effective AT1 antagonists is of great pharmaceutical interest and relies on understanding the role of Lys199 in the binding site: does this positively charged residue interact with the anionic tetrazole ring of many sartan drugs or not? To address this question, a comparative model of the AT1 receptor was constructed using the newly crystallized β2-adrenergic receptor as a template. This structure was relaxed using molecular dynamics in explicit solvent and lipids. Diverse AT1 antagonists were docked into this model, guided by SAR data and binding affinity trends. The results agree well with experimental information and suggest a novel binding orientation.|
Fragment-based methods have become established over the past ten years as a powerful approach in structure-based lead discovery, with a number of compounds now entering clinical trials. These recent successes have led to the methods being adopted to varying degrees within most pharmaceutical companies.
As with any screening approach, the design of the library is crucial. Besides the usual criteria of compound diversity and chemical suitability, a fragment library is also constrained by the methods used to detect binding and by how the fragments are going to be used. The initial versions of the Vernalis library were selected on fairly well-defined criteria that included cheminformatics filters and manual assessments of chemical tractability. Over the past seven years, the library has evolved considerably based on our experience in screening a wide variety of target classes. The new factors taken into account include the medicinal chemists' experience in evolving the fragments, the design of new fragments to explore binding hypotheses, and the challenge of new protein-protein interaction targets. In addition, practical considerations such as compound stability and continued commercial availability have had an impact.
This presentation will briefly review the evolution of the library and our experience of utilising fragments for drug discovery projects. The main focus will be on a recent analysis that contrasts the physico-chemical properties of the library with the hits seen against various classes of targets. We will discuss what implications this experience has for the design of the next refresh of our library.
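As a point of reference for the kind of cheminformatics filters mentioned above, many fragment libraries start from a "rule of three" style cut-off (molecular weight below 300, at most three donors, acceptors, and rotatable bonds, cLogP at most 3). The sketch below applies such a filter to precomputed properties; the thresholds are the commonly cited ones, not the Vernalis criteria:

```python
from dataclasses import dataclass


@dataclass
class Fragment:
    mw: float    # molecular weight (Da)
    clogp: float # calculated logP
    hbd: int     # hydrogen-bond donors
    hba: int     # hydrogen-bond acceptors
    rotb: int    # rotatable bonds


def passes_rule_of_three(f: Fragment) -> bool:
    """'Rule of three' style fragment filter with commonly cited cut-offs."""
    return (f.mw < 300 and f.clogp <= 3
            and f.hbd <= 3 and f.hba <= 3 and f.rotb <= 3)
```

In practice such a filter is only the first pass; as the abstract stresses, manual tractability assessment and experience with evolving hits drive the later revisions of a library.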
|One of the outstanding issues in de novo design is the generation of molecules that are synthetically accessible and which also represent non-obvious structural transformations. We have developed a knowledge-based approach to de novo design which is based on reaction vectors that describe the structural changes that take place at the reaction centre, along with the environment in which the reaction occurs. The reaction vectors are derived automatically from a database of reactions which is not restricted by size or reaction complexity. A structure generation algorithm has been developed whereby reaction vectors can be applied to previously unseen starting materials in order to suggest novel syntheses. The approach has been implemented in KNIME and is validated by reproducing known synthetic routes. We then present applications of the method in different drug design scenarios including lead optimisation and library enumeration.|
|Fragment-based drug discovery has become an active field in academia and industry. CAS has been identifying key concepts and substances found within the associated documents in the world's largest repository of chemistry-related information. It has previously been reported that bioactivity-related concepts and more than 30,000 specific targets have been associated with specific substances. Making use of these features, it is possible to explore the chemical space around fragments and discover new relationships. Specific examples of this functionality will be provided.|
|ArQule's parallel synthesis technology is a powerful tool, and it is continually being expanded. We recently undertook a case study to construct a new virtual chemical space from fragments derived from available reagents and more than 30 ArQule Platform Chemistries. This procedure seeks to improve the synthetic accessibility of potentially valuable hit molecules and takes advantage of the diversity of commercially available reagents. FTrees-FS software from BioSolveIT GmbH allows efficient searching across this space for novel chemical matter that shares chemical features with known active molecules but has improved synthetic accessibility. We also describe the application of this new chemical-space searching paradigm as part of ArQule's Kinase Inhibitor Platform (AKIP™) to identify kinase inhibitors with a type IV mechanism of action.|
We present LoFT, a new approach to focused combinatorial library design. In contrast to existing methods, chemical fragment spaces, which mainly consist of a collection of fragments and connection rules, are used as the underlying search space. By selecting one or several core fragments with the same link pattern, a focused library can be designed.
LoFT combines classical physicochemical design criteria with the feature tree descriptor for similarity/dissimilarity measurement. By applying the comparison directly at the fragment level, we are able to design focused libraries efficiently without explicitly combining the fragments. Several stochastic algorithms are provided for traversing the search space, employing a weighted multi-objective scoring function, filtering rules, and diversity mechanisms. In addition to simulated annealing and threshold acceptance, a cherry-picking mode, which selects the n best products from the search space, is available.
For validation, LoFT was applied to several drug design scenarios. Starting with known drug molecules, we generated focused libraries within desired property ranges.
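The cherry-picking selection described above reduces, in its simplest form, to ranking candidate products by a weighted multi-objective score and keeping the n best. A minimal sketch, not LoFT's actual implementation (the objective functions and weights here are placeholders):

```python
import heapq


def cherry_pick(products, objectives, weights, n):
    """Return the n best products under a weighted multi-objective score.

    `objectives` is a list of functions, each mapping a product to a score;
    `weights` balances the objectives (e.g. similarity vs. property ranges).
    """
    def total(p):
        return sum(w * f(p) for w, f in zip(weights, objectives))
    return heapq.nlargest(n, products, key=total)
```

Shifting the weights between objectives is what moves the selection along the trade-off between, say, feature tree similarity to a reference ligand and staying inside desired physicochemical property ranges.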
|The chemical information profession, and libraries in general, face a host of new technologies that have transformed, and will continue to transform, the daily practice of librarianship. These new technologies include search engine applications such as Google Scholar, open-source applications, new publishing models such as Open Access, and an emphasis on building digital repositories and preserving digital library content. This lecture gives an overview of these technologies and the challenges to traditional library spaces and services, and offers solutions for remaining relevant in this changing environment, as well as strategies for long-term career growth and skill building.|
|As academic libraries continue providing seamless access to information in electronic format, more and more users tend to study and find information online. This is most evident in the Google Generation of students who grew up navigating multimedia and information technologies. However, do they really get what they are looking for? What are they actually searching? How do they feel about their information search skills or the information resources they use? On the other hand, how can we, as librarians, provide effective chemistry information research training to them? Especially with more new technologies available, how can we apply them effectively to our services and train students to become chemistry-information literate? This presentation will try to answer these questions based on the author's work with chemistry graduate students at the University of Southern California. Many new technologies, such as blogs, YouTube, SlideShare, and online tutorials created with Camtasia, have been adopted and applied to library training on different occasions (e.g., new student orientation, ongoing workshops, and reference services). It is hoped that this work can provide an example for present-day chemistry information training. The related links are the orientation page (http://chemusc.wordpress.com/for-students/new-graduate-students-orientation-fall-2008/) and the online tutorials page (http://www.usc.edu/libraries/subjects/engineering/tutorial/index.php).|
|The traditional role of the chemical information specialist as a searcher and mediator is challenged by the increased availability of databases and electronic publications at the workbench of the researcher, resulting in decreased customer contact and retention. Teaching information literacy is often taken up as a counter-strategy, but due to time constraints it focuses on the most important sources and hence must be complemented with information services that support the user in locating and judging the appropriate source. One important quality that comes into play here is the knowledge about subjects and information sources that the information professional acquires through evaluating information products and through cooperation with publishers, information providers, database producers, and customers. We will present strategies, individual projects, and examples, such as a recent evaluation of the chemical content of Wikipedia and the Römpp Chemistry Lexikon, showing how sharing the specialist's and the user's knowledge may enhance products and library information services.|
|The dramatic changes in the nature of scientific publishing and communication in recent years have had a direct impact on the nature of the academic library as place. Spaces previously earmarked for the growth of our print collections should be reexamined for other purposes as we move significant portions of our collections into storage facilities, cancel print journal subscriptions, and withdraw unneeded materials. Space in the center of our campuses for offices, classrooms, and other facilities is at a premium, and libraries are being targeted by administrators as a source of new space for these purposes. Libraries need to forge new partnerships with campus units whose programs complement our own, and librarians must become versed in the principles and practices of programmatic planning, design, and assessment of learning spaces. Recent activities and thinking about reinventing science library spaces at the University of Chicago Library are described.|
|Over the last few years, the University of Minnesota Libraries conducted two studies of research-related habits of faculty, graduate students, and other researchers; one addressed the social sciences and humanities, and the other focused on the sciences. A major goal was to identify needs not currently being met where the libraries might be able to play a role in providing solutions. One key observation was the growing difficulty scholars and researchers have keeping up with the literature in their fields and subsequently managing that information. In response to this problem, the libraries formed an exploratory group with the goal of finding a more systematic approach to current awareness and personal information management. The group assessed existing tools and potential opportunities for collaboration and services. I will be discussing their recommendations and the resulting best practices guidelines for researchers.|
Although originally developed for complete de novo ligand design, the SPROUT software suite provides a set of tools ideally suited to the design of ligands incorporating one or more small fragments known through experimental methods (such as X-ray crystallography or NMR) to bind to specific regions of a target protein. In the case of a single fragment with known binding pose, SPROUT LeadOpt is able to apply a reaction knowledgebase and a set of available starting materials to carry out virtual reactions on the fragment to generate hypothetical ligands which are both readily synthesisable and also predicted to bind strongly to the target. Where two or more fragments bind in different regions, SPROUT is able to link them together, redocking to maintain the original poses, although also allowing some movement limited by user-selected tolerances.
The technology used will be discussed together with examples illustrating its application.
|The selection of appropriate molecules for incorporation into a fragment screening library is driven by the experimental technique with which binding will be detected and the way in which the hit information will ultimately be used in the lead development process. To design libraries for crystallographic fragment screening we have developed both general methodologies and rule-based filtering software to select appropriate fragment molecules from large commercial collections. Application of these procedures enables the flexible design (and redesign) of libraries for general target screening and the design of small focused libraries in which the molecules have more specific properties. Our approach to identifying early lead development candidates from collections of purchasable or synthesizable compounds uses the specific binding information from small fragment hits found in the crystallographic screen and incorporates procedures that maintain consistency between the dimensions of the lead development molecule and the structure of the target site.|
|In this work we demonstrate that MCSS (Multiple Copy Simultaneous Search) is a powerful CHARMm-based method for docking and minimizing small ligand fragments in an active protein-binding site. The performance and ability to recover the positions of native ligand-protein complexes was investigated using a novel, fully automated, and workflow-based MCSS implementation. Accurate scoring and placement of fragments is crucial when using MCSS in fragment-based ligand design, and we present validation using several small protein-fragment complexes. The results show that MCSS is able to recover the X-ray poses and, with only a few exceptions, score the poses correctly.|
|Astex Therapeutics has pioneered the application of fragment based drug design. Here we will briefly describe how fragment based drug design was used to identify AT7519, a novel CDK inhibitor which is currently in clinical trials. We will then go on to discuss how fragments and molecules identified during the CDK project were used to develop novel Aurora inhibitors. This work led to the identification of AT9283 which is also currently in clinical trials. The talk will discuss how state-of-the-art computational tools and structure based drug design were used to optimise the candidates.|
|E-research, or networked science, is data driven, both a consumer and producer of data. Faced with this data deluge, academic librarians have been redefining their role and potentially the mission of the libraries. As more government granting agencies require institutional data archiving and public access, librarians have become sought-after partners for input on research grants. Although the issues surrounding data management remain complicated, the basic principles guiding librarianship and research support still apply: harvesting, describing, archiving, and access. As the libraries at Purdue, Stanford, and Caltech offer models of data management, the real challenge for academic librarians will be in tailoring data stewardship at the institutional level.|
|Data-based research, or eScience, is a growing area in chemistry. Access to research data involves capture, indexing, curation, preservation, and rights management, all areas with which chemistry librarians have some knowledge and experience. In a growing number of white papers on data, librarians are seen as having a significant role in the success of data science, especially in education and curation. As this research approach develops in the chemistry discipline, issues are mounting, especially with current publishing models. Chemistry information professionals need to be aware of data issues, data science and data curation, to support the increased data needs of all chemists, understand the development of eChemistry as a research area, and interface with the publishing and communication venues in chemistry. This presentation will discuss the state of eScience in chemistry research and how we at the Cornell University Library are beginning to approach it, with implications for essential skill building for science librarians.|
|This presentation will cover the basic information of the new NIH public access policy and author rights issues related to this policy, and report on a recent ethnographic study of NIH funded authors. During fall 2008, the MIT Libraries conducted a qualitative study of NIH funded researchers' publication process in order to better understand the decision-making and workflow process that researchers use to disseminate their research. The results of this study will inform the Libraries about appropriate services to offer to assist NIH researchers when publishing.|
|Transcription factor Nuclear factor-κB (NF-κB) is a protein complex found in almost all animal cell types. NF-κB is involved in regulating immune response to infection and has been linked to cancer, inflammatory and autoimmune diseases, septic shock, viral infection, and improper immune development. It is also involved in cellular responses to stimuli including stress, free radicals and bacterial/viral antigens. Using over 400 known agonists and antagonists of NF-κB obtained from the literature, we computationally identify structurally similar clusters of compounds which interact at specific locations of the NF-κB cellular pathway.|
|Many methods have been developed to capture the biological similarity between two compounds for use in drug discovery. One disadvantage of conventional 2D similarity searching is that molecular features or descriptors unrelated to the biological activity carry the same weight as the important ones. To overcome this limitation, we introduce a novel similarity-based virtual screening approach based on a Bayesian inference network, in which features carry different statistical weights and statistically less relevant features are deprioritized. Here, the similarity searching problem is modeled using inference, or evidential reasoning, under uncertainty. An important characteristic of the network model is that it permits the combination of multiple queries, molecular representations, and weighting schemes. Our experiments demonstrate that the similarity approach based on the network model outperforms the Tanimoto similarity approach by a reasonable margin, offering a promising alternative to existing similarity search approaches.|
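The baseline being compared against can be made concrete. Below is a minimal Python sketch of bit-set Tanimoto similarity alongside a hypothetical weighted variant in which each feature carries a statistical weight; the actual Bayesian inference network described in the abstract is considerably more elaborate, and the function names and weighting scheme here are illustrative only.

```python
def tanimoto(fp_a, fp_b):
    """Classic Tanimoto coefficient on feature (bit) sets."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b)

def weighted_similarity(fp_a, fp_b, weight):
    """Illustrative weighted analogue: features contribute according to a
    statistical weight (unweighted features default to 1.0), so
    activity-irrelevant features can be deprioritized."""
    a, b = set(fp_a), set(fp_b)
    num = sum(weight.get(f, 1.0) for f in a & b)
    den = sum(weight.get(f, 1.0) for f in a | b)
    return num / den
```

With an empty weight table the weighted variant reduces to the Tanimoto coefficient, which makes the relationship between the two measures easy to verify.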
FlexNovo is a molecular design program for structure-based de novo searching in large fragment spaces following a sequential growth strategy. Taking the active site as structural information, it uses as input fragment spaces that consist of several thousand chemical fragments and a corresponding set of rules, which primarily specifies how the fragments can be connected with each other. Synthesizability can be ensured by several placement-geometry, drug-likeness and diversity filter criteria that are directly integrated into the build-up process.
FlexNovo can be used for fragment expansion, e.g., starting from an X-ray structure that has been produced in a fragment screen, or entirely in a de novo fashion, where the algorithm places fragments arbitrarily in the pocket and then grows the compound from the most promising ones.
We demonstrate the performance of FlexNovo on a few relevant medicinal chemistry projects.
|Using a pharmacophore to describe the interactions between a biological target and its corresponding ligands is an established virtual screening tool at the early stages of drug discovery. In recent years, the use of fragment-based approaches in drug discovery has gained wide popularity. In general, a fragment-based approach is very desirable since starting with low molecular weight fragments (rather than full-sized molecules) offers the advantage of increased sampling of chemical space and the possibility of improved drug-like properties. We have introduced an in silico method that utilizes pharmacophores for a combinatorial fragment-based approach applicable to both the design of novel compounds, and for lead optimization. Given a pharmacophore, small molecular fragments can be rapidly assembled into new molecules. Here we illustrate how applying this methodology was instrumental in our lead refinement efforts.|
Theoretically, any docking engine can be used to place small molecule fragments into the active sites of receptors and score them. However, most methods suffer from under-defined constraints -- a small fragment in a large cavity -- and thus perform inadequately. In contrast, the eHiTS engine has been designed to work exactly in this scenario: it breaks down larger ligands into small fragments and docks those independently, then reconnects the poses. eHiTS provides very accurate (about 0.5 Å RMSD) pose prediction for small fragments and is capable of linking them up without significant loss of accuracy. The method will be presented with practical examples on how to use eHiTS for fragment-based structure design. Validation results will be presented to demonstrate the method's accuracy.
Z. Zsoldos, D. Reid, A. Simon, S.B. Sadjad, A.P. Johnson: eHiTS: a new fast, exhaustive flexible ligand docking system. J. Mol. Graph. Model. 26 (2007), 198-212; doi:10.1016/j.jmgm.2006.06.002
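The quoted pose accuracy of about 0.5 Å refers to the root-mean-square deviation between predicted and crystallographic atom positions. A minimal sketch of the standard RMSD calculation for two matched poses (assuming identical atom ordering and no additional superposition step, which docking validation normally does not apply):

```python
import math

def rmsd(pose_a, pose_b):
    """RMSD between two poses given as lists of (x, y, z) coordinates
    for the same atoms in the same order."""
    n = len(pose_a)
    sq_sum = sum((ax - bx)**2 + (ay - by)**2 + (az - bz)**2
                 for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b))
    return math.sqrt(sq_sum / n)
```

A predicted fragment pose within 0.5 Å RMSD of the X-ray pose is, by the usual 2 Å convention for docking success, a very tight reproduction.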
In silico approaches considering either descriptor-, ligand- or structure-based information for navigating within chemical fragment spaces have been established within the lead-finding phase of a drug design project. One open question still remains about the compilation and setup of fragment spaces. Therefore we have compiled a new and elaborate set of rules for the breaking into retrosynthetically interesting chemical substructures (BRICS) and used this for obtaining chemical fragments from biologically active compounds and vendor catalog sources.
Based on our studies three new fragment sets have been compiled, with different optimized performances in retrieving random sets of queries from different sources, which are available at http://www.zbh.uni-hamburg.de/BRICS .
In addition we performed a comparative study of the BRICS fragment space with fragment spaces derived from kinase inhibitors. In our presentation we will highlight the similarities as well as the differences between these two fragment universes.
|The selection of the solid form for development is a milestone in the conversion of a new chemical entity into a drug product. An understanding of the materials science and crystallisation of a new active pharmaceutical ingredient is crucial at the interface of drug substance manufacturing and drug product processing. In this presentation the broad challenges facing pharmaceutical scientists, as a consequence of polymorphism, hydrate and solvate formation during product design, will be highlighted. The opportunities presented by structure-based computational tools to help address these challenges will be presented in terms of a framework that addresses both the business need and the new emerging regulatory environment.|
|Cocrystallization has recently been gaining popularity within the pharmaceutical industry as a viable method for producing a solid dosage form. The effective use of cocrystallization for this purpose is clearly affected by the capacity with which the solid forms produced can be controlled and predicted. The number of potential coformers that could be used in a cocrystallization screen for a drug molecule is, for example, vastly greater than the number of possible counterions for a salt screen. Towards this goal, the systematic analysis of a family of structures containing a common molecular species can aid significantly in the understanding of the solid state behaviour of the particular system and cocrystallization in general. This contribution introduces a recently developed set of computational tools to analyse crystal packing patterns and applies them to investigate concepts such as motif competition and the adherence to Etter's first rule in a dataset of pharmaceutical cocrystals.|
|Crystal Engineering studies can be credited with giving rise to the recent interest in cocrystals (molecular complexes) of pharmaceuticals as a means of improving the physical properties of pharmaceutical dosage forms. Hydrogen bonds have been the traditional tool used for cocrystal design as well as for analysis of crystal structures. The fact that hydrogen bonds can be observed in crystal structures and visualised easily does not necessarily mean that they are 'structure-directing', and other, less directional, interactions may be energetically competitive. Topics will include the importance of dispersion-dominated packing interactions in cocrystals compared to that of hydrogen bonds as well as investigating packing similarity, polymorphism, pseudo-isostructurality, and the occurrence of common 1D channel structures within a family of related cocrystals containing a common active pharmaceutical ingredient (API).|
A methodology has been developed to assess the likelihood of hydrogen bond occurrence in crystal structures. A reliable prediction is potentially very valuable during pharmaceutical solid form selection since these strong, consistent interactions are crucial to structural stability, and likely variations often indicate polymorphism.
Its application will be demonstrated on a selection of existing polymorphic APIs. Characterisation literature is available for these, providing relative stabilities for comparison. Stable and metastable polymorphs are shown to differ significantly in the extent of low-propensity hydrogen bonds.
The methodology is based on a model function optimized on hydrogen bonding data of related, known compounds. Once a model is derived, only a target chemical diagram is required for prediction owing to the form of descriptors: topological and chemical parameters which describe influences such as steric accessibility, competition between groups, and donor and acceptor type. Their form and influence will also be discussed.
1. Galek, P.T.A., Fábián, L., Allen, F.H., Motherwell, W.D.S. & Feeder, N. (2007). Acta Cryst. B63, 768-782.
2. Bernstein, J. (1993). J. Phys. D: Appl. Phys. 26, B66-B76.
3. Singhal, D. & Curatolo, W. (2003). Adv. Drug Del. Rev. 56, 335-347.
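The abstract does not give the functional form of the propensity model; one common choice for binary occurrence data of this kind is a logistic model over the stated topological and chemical descriptors. The sketch below is an illustration of that general approach with hypothetical weights, not the published parametrization.

```python
import math

def hb_propensity(descriptors, weights, intercept):
    """Logistic model: estimated probability that a given donor/acceptor
    pair forms a hydrogen bond, from descriptors such as steric
    accessibility, competition between groups, and donor/acceptor type.
    The weights and intercept would be fitted to hydrogen bonding data
    from related, known structures."""
    z = intercept + sum(w * x for w, x in zip(weights, descriptors))
    return 1.0 / (1.0 + math.exp(-z))
```

Once the model is fitted, only the target chemical diagram is needed: the descriptors are computed from connectivity and chemistry alone, so the propensity of each potential hydrogen bond can be ranked before any crystal structure is known.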
|The goal of predicting the solid state structures of an organic molecule from its molecular structure alone has attracted considerable industrial interest. The difficulty of the task is demonstrated by the regular Blind Test in Crystal Structure Prediction (CSP), which is hosted by the Cambridge Crystallographic Data Centre. In this contribution, the previous Blind Tests are briefly reviewed and the successful application of a new CSP approach to all four compounds (including a co-crystal) of the 2007 Blind Test is presented (see also Neumann, Leusen and Kendrick, Angewandte Chemie International Edition, 47: 2427 – 2430 (2008)). The central part of the new approach is a hybrid method for the calculation of lattice energies that combines density functional theory simulations with an empirical van der Waals correction. Typical applications of the new methodology will be discussed, as well as its limitations.|
|We will discuss recent advances in applying molecular mechanics based scoring methods to protein-ligand complexes. Some key issues that will be addressed are sensitivity to the 3D receptor model, treatment of solvation, effects of conformational sampling, discrimination between binding modes, high-throughput applications and the use of force field energies in QSAR models.|
Aminergic GPCRs have been in the focus of pharmaceutical research for the past decades. In the absence of crystal structures, however, all efforts had to be limited to ligand- and homology-model-based methods. The recently solved structure of the β2-adrenergic receptor now offers the opportunity to use structure-based design approaches. Consequently, we carried out a virtual screening campaign using the program DOCK and the 1 million molecules of the "lead-like" subset of the ZINC library. Upon testing of 31 selected molecules, six were found to be active with binding affinities below 7 µM, the best compound binding with a Kd of 17 nM.
In order to evaluate routes for improving the ranking and to investigate the energetic contributions to β2-adrenergic binding, we calculated Linear Interaction Energy (LIE) models based on binding data obtained from the literature. Specifically, we used the LIECE (Linear Interaction Energy with Continuum Electrostatics) approach developed by Huang and Caflisch. The resulting model, which showed good predictivity, was used to reevaluate the six hits of the primary screening as well as an in-house data set. Interestingly, the coefficients for the energy terms differ significantly from previously published LIECE models for proteases and kinases, which demonstrates the distinctness of GPCR binding sites.
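An LIE-type model of the kind described estimates binding free energy as a weighted sum of interaction-energy differences (van der Waals and electrostatic terms in the simplest form), with the coefficients fitted by least squares against experimental affinities. A toy two-coefficient fit, with entirely made-up numbers purely for illustration:

```python
def fit_lie(data):
    """Least-squares fit of dG ~= alpha*dE_vdw + beta*dE_elec.
    data: list of (dE_vdw, dE_elec, dG_exp) tuples.
    Solves the 2x2 normal equations in closed form."""
    Svv = sum(v * v for v, e, g in data)
    See = sum(e * e for v, e, g in data)
    Sve = sum(v * e for v, e, g in data)
    Svg = sum(v * g for v, e, g in data)
    Seg = sum(e * g for v, e, g in data)
    det = Svv * See - Sve * Sve
    alpha = (Svg * See - Seg * Sve) / det
    beta = (Seg * Svv - Svg * Sve) / det
    return alpha, beta

def lie_score(dE_vdw, dE_elec, alpha, beta):
    """Predicted binding free energy for a new compound."""
    return alpha * dE_vdw + beta * dE_elec
```

The abstract's observation that GPCR coefficients differ from protease and kinase models corresponds, in this picture, to alpha and beta taking target-class-specific values.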
|Target-specific optimization of scoring functions for protein-ligand docking is able to achieve significant improvements in the discrimination of active and inactive molecules. This concept can be extended by taking into account not only a single target structure but an ensemble of structures from a target family. The objective function, however, has to be generalized for this case and a suitable global optimization algorithm has to be applied. It is shown that the virtual screening performance for kinases improves significantly upon using scoring function parameters optimized specifically for that target family. Additionally, the major reason for improved screening performance on kinase targets is identified. In summary, a general framework for the global, multi-objective optimization of scoring functions is presented which allows for taking advantage of prior knowledge in a systematic, effective, and robust way.|
|Screening our 1.5 million compound archive requires 6 months and $1,000,000. Profile-QSAR is a novel kinase-specific, fragment-based, 2D modeling method that combines data for >100,000 compounds against >70 kinases to produce fast, accurate kinase activity predictions for iterative screening. Since fragment-based methods lose accuracy for novel chemotypes, docking is also employed. However, conventional docking suffers 3 limitations: 1) it requires a target protein structure, 2) it is slow, and 3) it does not correlate with affinity. Using medium-throughput experimental activity data, AutoShim adjusts pharmacophore “shims” to produce highly predictive, target-specific scoring functions. Over 5 months, our entire archive was pre-docked into a “Universal Kinase Surrogate Receptor” of 16 diverse kinase crystal structures. AutoShim can now be “shimmed” for new kinases with experimental binding data to accurately predict activity for 1.5 million compounds in hours instead of weeks, without a crystal structure. Together, Profile-QSAR and AutoShim produced effective iterative screens.|
Predictive scoring functions based on statistical learning techniques generally require large amounts of quantitative training data. Unfortunately this numerical knowledge is usually unavailable or prohibitively expensive to obtain.
For practical application however, experts often only require qualitatively precise results to define accurate ranking orders. Inspired by the inherent reaction prediction capability of human chemists, we propose a novel machine learning technique in the context of state energy calculations. QM/MM and wet lab experiments can supply some quantitative energy data, but are impractical to run on a large scale. In contrast, chemists exhibit significant problem-solving ability without making exact numerical calculations. Rather, their decisions are based solely on qualitative knowledge of trends and ranking orders in molecule stability and reaction rates. Our method utilizes the limited quantitative experimental data available together with this qualitative information to yield score functions accurate enough to reproduce the problem-solving capability of human experts.
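As a concrete, much-simplified illustration of learning from ranking orders rather than exact values, the sketch below fits a linear score function from pairwise "A is more stable than B" constraints using perceptron-style updates. The feature vectors and update rule are hypothetical; the technique described in the abstract additionally blends in the limited quantitative data available.

```python
def score(w, x):
    """Linear score of a molecule's feature vector x under weights w."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fit_ranking(pairs, features, lr=0.1, epochs=50):
    """Learn weights from qualitative constraints.
    pairs: list of (better, worse) item ids, e.g. ('A', 'B') meaning
           A should score higher than B (more stable, faster-reacting).
    features: dict mapping item id -> feature vector."""
    n = len(next(iter(features.values())))
    w = [0.0] * n
    for _ in range(epochs):
        for hi, lo in pairs:
            if score(w, features[hi]) <= score(w, features[lo]):
                # Constraint violated: nudge weights toward satisfying it.
                for i in range(n):
                    w[i] += lr * (features[hi][i] - features[lo][i])
    return w
```

The fitted function need only reproduce the ranking, not absolute energies, which mirrors the qualitative reasoning of a chemist judging relative stabilities.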
|Developing new and stable crystal forms for drug product development remains a challenge from both a commercial viewpoint as well as from our need to further understand molecular aggregation and crystal packing. Our understanding of molecular recognition, supramolecular chemistry and crystallization phenomena help in what is frequently referred to as “crystal engineering”. The ability to couple experimental observations with data in the CSD presents real opportunities. Multicomponent crystals (where two or more distinct chemical species are present in the crystal) is an area of particular interest to pharmaceutical chemists where salts, hydrates and cocrystals (amongst others) can all be possible outcomes of a crystallization process. Screening for all possibilities becomes critical and while addressing some of the above issues I will also outline recent developments in mechanochemical methods as a screening tool.|
Crystal engineering facilitates discovery of new crystal forms for long known molecules that are of practical utility such as active pharmaceutical ingredients, APIs. This contribution will focus upon an emerging class of crystal form, pharmaceutical cocrystals, with emphasis upon the following:
- A historical perspective of this long known but little studied class of compounds;
- Statistical analysis of the probability that certain supramolecular heterosynthons will exist in the presence of competing functional groups, i.e. how to select co-crystal formers for APIs using statistics generated from the Cambridge Structural Database;
- Examples of new co-crystals that include some long known natural products and APIs and how they fine tune physical properties of clinical relevance;
- An analysis of polymorphic co-crystals that focuses upon the persistence of supramolecular heterosynthons in polymorphs.
A database of organic cocrystal structures was extracted from the Cambridge Structural Database. Molecular descriptors were calculated for all molecules in the cocrystal dataset. The resulting database describes pairs of molecules that form cocrystals with each other in terms of the calculated molecular properties.
The properties that are generally similar or complementary for molecules in a cocrystal were identified by using correlations between the corresponding molecular descriptors. Two-dimensional density plots and box plots were created to visualise the observed trends and to elucidate their statistical significance.
The results show that cocrystals are usually formed by molecules of similar shapes and polarities. Analysis of previous cocrystal screening experiments clearly demonstrates that the efficiency of screening can be increased by considering shape and polarity descriptors. Unusual cocrystals that are formed by molecules of different polarities and shapes may help in the qualitative understanding of the chemical reasons behind the statistical results.
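At their simplest, the descriptor correlations underlying such an analysis reduce to Pearson correlation between paired molecular properties of cocrystal partners. A minimal sketch (the descriptor values in the test are hypothetical polarity-like numbers for coformer pairs, not data from the study):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    lists of descriptor values (e.g. polarity of molecule A vs.
    polarity of its cocrystal partner B across many structures)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)
```

A strongly positive coefficient for a descriptor pair is what "molecules of similar shapes and polarities tend to cocrystallize" looks like numerically; screening efficiency improves by prioritizing coformers that fall on that trend.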
|A key element of preclinical drug development includes the assessment of physical form diversity. In this context, it is not uncommon to see crystal structures of novel polymorphs, solvates and salts being solved from XRPD data using global optimization approaches, such as simulated annealing. As a general rule of thumb, the more complex the structure, the more difficult it is to locate the global minimum in the real-space search. It is, therefore, beneficial to develop strategies that maximise the chances of solving structures that have a high number of degrees of freedom (DoF). Here, we investigate strategies based on the CSD and the Mogul program. When the number of internal DoF in a global optimization is high, Mogul torsion angle search space restrictions can increase the chances of solving a structure. CSD-derived geometry information is also advantageous in Z-matrix construction and in the derivation of restraints in Rietveld refinement.|
|Crystal form technology is a powerful tool that can present certain scientific and legal opportunities during innovation in pharmaceutical materials development. From a scientific perspective, the intelligent and efficient design of an optimum crystal form can potentially facilitate development and expedite regulatory approval. From a legal standpoint, these same potential advantages may, in certain cases, confer patentability on innovative advances in the crystal form technology surrounding a development candidate. As a result, crystal form technology represents an important intersection between science and the law—an intersection that continually evolves in response to both scientific and legal developments. This presentation will summarize the latest prominent court cases addressing patentability in the pharmaceutical field. It will then discuss significant recent advances in crystal form technology, and it will offer an outlook on how such scientific advances may impact the legal standard for patentability in this key area of pharmaceutical development.|
This work consists of developing scoring functions to prioritize ligand poses in a receptor site. Our past efforts in this area resulted in the development of the LigScore1 and LigScore2 functions (1). They were obtained by seeking scoring functions that reproduce observed binding affinities (pKi values) using experimentally observed ligand poses, employing a variety of protein systems. These functions have had some success in predicting binding affinities in the cases tested, while work with other systems suggests that their performance needs improvement. The scoring problem is a complex one, as seen from efforts described in the literature; using a single scoring function to deal with a wide variety of protein systems appears to be a tall order. One then wants to see whether a scoring function can be developed for a single class of proteins. If that can be done to a higher degree of accuracy for a handful of protein systems, one may then be able to address the problem of understanding the changes in scoring functions required for different classes of proteins.
We present a workflow-based statistical algorithm to fine-tune LigScore functions for a specific class of proteins. The workflow involves preparation of proteins, including protonation using a pK prediction algorithm and hydrogen addition using the HBUILD algorithm. The statistical methods involve regression on LigScore parameters to obtain coefficients that optimally fit the observed pKi values for each ligand. We compare the LigScore coefficients obtained for two different classes of proteins, HIV protease and kinases.
(1) Journal of Molecular Graphics and Modelling, Vol. 23, Issue 5, April 2005, Pages 395-407
|We have recently developed a novel iterative knowledge-based scoring function for protein-ligand interactions and protein-protein interactions, referred to as ITScore and ITScore-PP, respectively. The key idea is to extract atom-based, distance-dependent pair potentials from a large training set of native and decoy complex structures. ITScore and ITScore-PP have been extensively tested for binding mode and affinity predictions, using diverse test sets published in literature. The results were compared with other scoring functions. ITScore and ITScore-PP showed very good performance. Inclusion of the entropic effect and desolvation effect further improved the predictions.|
|The eHiTS scoring function departs from traditional atom-based scoring, using a novel concept of scoring interactions based on Interacting Surface Points (ISPs). A statistically derived empirical function is constructed using a 4-parameter geometric description of the relationship between ISP pairs. The energy associated with ISP pairs is deduced from the statistics using the Boltzmann distribution function. Temperature factors were considered to account for the variable uncertainty of atom positions in PDB X-ray structures. Additional scoring terms include desolvation energy, ligand conformational strain, entropy loss upon binding, pose depth within the binding pocket, and reproduction of key interaction patterns. Receptor cavities are automatically clustered based on shape and surface similarity, and specific weight sets are adapted for each cluster. Results are demonstrated on the Acetylcholine Binding Protein (AChBP) with key cation-π interactions. eHiTS produces the correct pose with the best score and gives good correlation with experimental binding affinities.|
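Deducing pair energies from observed statistics via the Boltzmann distribution is the standard inverse-Boltzmann device: geometries that occur more often than a reference expectation are assigned favourable energies. A minimal sketch of that conversion (the kT value and function name are illustrative, not eHiTS internals):

```python
import math

KT = 0.593  # approx. kT in kcal/mol at 298 K (illustrative choice)

def pair_energy(g_obs, g_ref, kT=KT):
    """Inverse-Boltzmann potential: E = -kT * ln(g_obs / g_ref).
    g_obs: observed frequency of an ISP-pair geometry bin in the
           structure database; g_ref: its expected frequency in a
           non-interacting reference state."""
    return -kT * math.log(g_obs / g_ref)
```

Over-represented geometries (g_obs > g_ref) come out with negative, i.e. favourable, energies, while under-represented ones are penalized, which is exactly the knowledge-based reading of the crystallographic statistics.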
|We will report on our latest version of the Glide XP scoring function which has been developed to calculate binding affinities for diverse compounds. Our new results demonstrate both the ability to rank order diverse compounds, and to reject random database ligands with a proficiency that is significantly better than previous efforts along these lines. The scoring function is global with the exception of core reorganization parameters which are associated with significant induced fit structural changes of the receptor; such effects cannot be modeled in principle by an empirical function which considers only protein-ligand interactions, and therefore must be incorporated into the model as offsets. A number of novel components, including receptor strain energy induced by ligand rings, explicit use of a water displacement functional generated by molecular dynamics, and many special terms for unusual chemical interactions such as pi-cation interactions, have been incorporated into the scoring function.|
|The question has often been asked of late whether structure-based virtual screening is inherently superior to ligand-based screening or vice versa. A little reflection shows that the distinction between the two approaches is largely an artificial one, particularly when 3D QSAR methods are being compared to docking with adaptive scoring functions. Both areas have a marked proclivity for producing misleading statistics, especially where "performance" is concerned, but they have other things in common as well. The underlying similarities and differences will be discussed, along with recommendations for minimizing the problems encountered in applying either prospectively, where the distinction between "empirical" approaches and those based on "first-principles" is probably more important.|
The Family History Archive is a growing collection of thousands of digitized (full text) published genealogy and family history books. The archive includes family, county and local histories, how-to books, magazines and periodicals, medieval books, and international gazetteers. The books come from the FamilySearch Family History Library, and several other major genealogical collections nationwide. It can be accessed from www.familysearch.org or from www.familyhistoryarchive.byu.edu, free of charge. Items may be searched by author, title, surname, keyword, or full text.
We will also briefly talk about the history of the Archive, why partner libraries joined the project and how they were selected, and what criteria they follow to place books in the Archive.
The presentation will largely focus on the processes we follow to digitize so many books and on the equipment and software we use, including how they work and what modifications were made to meet our needs.
|Genetic genealogy is a powerful new tool used in conjunction with family history research. FamilyTreeDNA pioneered this field in April 2000, when it made available to the general public what had until then been restricted to academia and research institutions. There are two basic types of DNA tests available for genealogy: Y-DNA and mtDNA tests. The Y-DNA test is only available for males, since it involves testing the Y-chromosome, which is passed from father to son. Both males and females inherit mtDNA from their mothers; testing mtDNA provides information about the direct female line of the person. Because the Y-chromosome typically follows surnames, there is a much wider range of applications for Y-DNA testing, and a much broader spectrum of problems that can be solved and information that can be acquired, especially when utilizing a large comparative database. This will be the main focus of the presentation.|
|We describe how forensic genealogy and DNA analysis were used to identify severely compromised remains found in the debris field of Northwest Airlines Flight 4422, which crashed in 1948 in a remote area of Alaska. The frozen human arm and hand, discovered in 1999, were assumed to belong to one of the thirty crash victims. Despite the challenges of performing DNA analysis and fingerprint matching on such degraded remains, by September 2007 all but two victims had been ruled out by one or both techniques. Victim #29 presented additional problems due to the difficulty of locating a mitochondrial DNA reference for his maternal family line in Ireland. We report how these challenges were overcome by forensic methods of genealogical research combined with new DNA analysis techniques to make a positive identification of remains that had been preserved in a glacier for over 50 years.|
|The Utah Population Database (UPDB) is a unique resource of genealogic data linked through probability modeling to causes of death and to Utah and Idaho cancer records. UPDB has been used by geneticists to select families likely to have a genetic condition and to identify the genes involved. The APC gene, responsible for Familial Adenomatous Polyposis (FAP) and colorectal cancer, is one such example. A Utah pioneer family from the 1840s and a family from New York carry an attenuated form of FAP (AFAP). They were linked through genealogy records to a couple who came to America from England around 1630, sixteen generations before the present day. Genetic analysis of fifteen families from across the USA with this same APC mutation shows that they are related. In view of the apparent age of this mutation, a notable fraction of colorectal cancers in the USA could be related to this founder mutation.|
|We have developed a scoring function training and testing paradigm in which linear combinations of terms can be constructed in a systematic way, with weights determined to fit the metric of interest. This metric may be the RMSD of fit between related ligand binding sites, or the RMSD of a ligand docking relative to its crystallographic binding mode, or the least-squares fit between predicted and experimentally characterized binding affinities. This procedure includes iterative repartitioning of the training set to assess the stability of terms' weights across different sets of proteins, followed by cross-validation of predictive accuracy on proteins not included in the training. Results will be presented for aligning and quantifying similarity between binding sites, and for improving ligand docking and ranking in virtual screening.|
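The weight-fitting step described above can be sketched as an ordinary least-squares problem. The following is a minimal illustration only; the term values and affinities are hypothetical stand-ins for the per-complex scoring-function terms and experimental data the paradigm would actually use:

```python
import numpy as np

# Hypothetical per-complex values of the scoring-function terms
# (e.g., H-bond, lipophilic, strain); rows = complexes, cols = terms.
terms = np.array([
    [1.2, 3.4, 0.5],
    [0.8, 2.9, 1.1],
    [2.1, 1.5, 0.2],
    [1.7, 2.2, 0.9],
])
# Experimentally characterized binding affinities for the same complexes.
affinity = np.array([-7.1, -6.3, -5.8, -6.6])

# Determine term weights by a least-squares fit to the metric of interest.
weights, residuals, rank, _ = np.linalg.lstsq(terms, affinity, rcond=None)

# The predicted affinity of a pose is the weighted sum of its term values.
predicted = terms @ weights
```

In the actual paradigm this fit would be repeated over iterative repartitions of the training set to check the stability of the weights, then cross-validated on held-out proteins.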
Despite the generally good quality of deposited protein structures, uncertainties remain when it comes to defining an active site. Yet the correct physico-chemical surrounding of a ligand can be crucial in molecular docking and screening. For example, defining metal atoms as pharmacophores, assigning an alternative amino acid or protonation state, and the correct insertion and orientation of (displaceable) water molecules all play a vital role in the preparation of an active site for docking. It may also be necessary to scale the contribution of certain interactions to the overall score.
We present recent advances in preparing an active site for docking and screening, along with a few proofs of concept.
|Successful drug discovery often requires optimization against a set of biological and physical properties. We describe our work on multi-parameter approaches to ligand-based de novo design, and studies that demonstrate its ability to generate lead hops or scaffold hops between known classes of ligands for some example receptors. Multiple design criteria, including pharmacophoric, shape, and structural (fingerprint) similarity, can be employed alongside various selectivity- or ADME-related properties (e.g. Lipinski properties, polar surface area, similarity to off-targets, etc.) to guide the evolution of structures which meet multiple design criteria.|
|A huge amount of effort has gone into the problem of predicting the binding affinity of given poses of hypothetical ligands docked to protein binding sites. However, if these hypothetical ligands have been produced by de novo design, an equally important consideration is whether they are synthetically accessible. Over the past decade, we have attempted to address this problem in a variety of ways. The CAESA program combines an empirical approach to molecular complexity with a relatively rapid retrosynthetic analysis to find starting materials, the hypothesis being that complexity contained within readily available starting materials is apparent rather than true complexity. An alternative approach, incorporated into the SPROUT program, analyses structural complexity by comparing substitution patterns of ligand structures with those found in known drugs and databases of commercially available starting materials. The relative merits of these approaches will be discussed.|
|The identification of chemical names in documents has provided platforms to enable structure-based searching of patents and the mark-up of chemistry publications. A natural extension is the ability to make chemistry articles, blog pages, wiki pages and other documents searchable by the extracted chemical structures. The ChemSpider database contains over 21 million unique chemical entities drawn from close to 200 data sources and provides a rich resource of information for chemists. We will report on our efforts to integrate chemical name extraction with the ChemSpider platform to enable structure searching of Open Access chemistry articles and online chemistry materials. We will unveil our online document markup platform for chemists to make both their open- and closed-access publications searchable by the language of chemistry – the structure.|
Experimental and Theoretical Data (Primary Data) constitute the backbone of research in chemistry. Primary data are recorded, analyzed and stored every day in every chemistry laboratory. Typical primary data in chemistry are created
* Using the vast array of analytical techniques (GC, HPLC etc.)
* Employing spectroscopic methods (NMR, MS, UV/VIS, IR, X-Ray etc.)
* As a result of theoretical calculations (quantum mechanics, simulation of spectra etc.)
* Or by using the various high-throughput technologies in medicinal chemistry.
Efficient access to primary data is a prerequisite for successful chemical research. Chemists need access to their own data and to reference data from the chemical literature.
So far chemists have not developed a managed system for storing and publishing their primary data. Some journals offer the possibility of augmenting the publication with supplementary material. However, the accessibility of this material remains far from optimal.
The TIB is aiming to improve this situation. Since 2005 the TIB has been recognized as the world's first registration agency for primary scientific data. One of the first scientific disciplines to systematically publish and care for its primary data is the geological sciences. These data, which remain on local servers, receive from the TIB a permanent, individual Digital Object Identifier (DOI). As with a journal article, this technology allows for easy, permanent and error-free reference and retrieval of the primary data.
One objective is to extend this technology to other scientific disciplines. In this context the TIB and Thieme Chemistry have started a collaboration to develop the technology, rules and procedures to publish chemical primary data with their own DOI.
The talk will present first results and invite the community for discussion and input.
Ontologies, formal computer-readable descriptions of the objects of interest in a particular field, are widely used in molecular biology and, along with the InChI identifier, form the basis of the RSC's award-winning Project Prospect. Hitherto the approaches of formal ontology have not been applied to nanotechnology. In this talk we outline good practice in ontology development and describe our recent successes in developing ontologies to represent nanoparticles themselves and the methods used to create them.
|The primary purpose of the Pistoia Alliance is to streamline non-competitive elements of the pharmaceutical drug discovery workflow by the specification of common business terms, relationships and processes. Every pharma company and software vendor is challenged by the technical interconversion, collation and interpretation of drug/agrochemical discovery data, and as such there is a vast amount of duplication, conversion and testing that could be reduced if a common foundation of data standards, ontologies and web services could be promoted, and ideally agreed, within a non-proprietary and non-competitive framework. This would allow interoperability between a traditionally diverse set of technologies to benefit the healthcare sector. Through global collaboration, this pragmatic community will derive, instantiate and make available web services for consumption by academic institutions, vendors and companies under an Open Source framework. We will describe current progress, lessons learned, and how companies, academics and others can participate in this approach.|
|DailyMed is a website hosted by the FDA providing access to information about marketed drugs. This information includes FDA-approved labels (package inserts) and provides a standard, comprehensive, up-to-date look-up and download resource for medication content and labeling as found in medication package inserts. With the intention of enhancing the dataset by making it searchable by chemical structure/substructure, we determined that the data contained numerous chemistry errors. We have therefore used a combination of text-mining, automated and manual curation to improve the quality of the data set. In so doing we have also made querying of the data more flexible. Specifically, we have used Microsoft SharePoint technology to create a portal allowing both text-based and structure-based querying. We will report on the advantages such an approach delivers in terms of flexible interrogation of DailyMed.|
|A green engineering course was proposed as a signature elective at Universidad de los Andes. A set of planned activities was carried out for the development of this course. Three laboratory practices were completed: synthesis of catalyst supports, production of biodiesel, and glycerin oxidation by heterogeneous catalysis with impregnated catalysts. In these practices, the greenness of the processes was followed by measuring the material balance and the waste generated, and the EPI Suite program was used to evaluate the environmental performance of the reactant, intermediate, and product substances. Other software available on the EPA web page was used as well. The green chemistry web page at the University of Scranton was used to evaluate the engineering aspects of the different green topics, and the bibliographic resources of the Universidad de los Andes library were used for further information on those topics. Another web page was used by the students for catalyst characterization techniques, with good results.|
Current efforts in Metabolomics, such as the Human Metabolome Project, collect structures of biological metabolites as well as data for their characterisation, such as spectra for identification of substances and measurements of their concentration. Still, only a fraction of existing metabolites and their spectral fingerprints are known. Computer-Assisted Structure Elucidation (CASE) of biological metabolites will be an important tool to leverage this lack of knowledge. Indispensable for CASE are modules to predict spectra for hypothetical structures.
This talk describes our experiments with different statistical and machine learning methods to predict proton NMR spectra based on data from our open database NMRShiftDB.
A mean absolute error of 0.18 ppm was achieved for the prediction of proton NMR shifts ranging from 0 to 11 ppm. Random forest, J48 decision tree and support vector machines achieved similar overall errors.
NMR prediction methods applied in the course of this work delivered precise predictions which can serve as a building block for Computer-Assisted Structure Elucidation for biological metabolites.
We will also elaborate on our first predictions and structure-recall experiments with large public databases and demonstrate how this can be useful in de novo CASE contexts. All experiments described in the course of this talk were performed using our open source chemoinformatics library, the Chemistry Development Kit (CDK), and open access data.
Steinbeck, C., Krause, S. & Kuhn, S. NMRShiftDB - constructing a free chemical information system with open-source components. J Chem Inf Comput Sci 43, 1733-1739 (2003).
Steinbeck, C. et al. The Chemistry Development Kit (CDK): an open-source Java library for chemo- and bioinformatics. J Chem Inf Comput Sci 43, 493-500 (2003).
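The shift-prediction setup described above can be sketched with a random forest regressor, one of the methods compared in the study. This is a minimal illustration on synthetic data: the descriptors below are hypothetical stand-ins for the atomic-environment features one would derive from NMRShiftDB entries.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical atomic-environment descriptors for protons (rows = atoms).
X = rng.random((200, 8))
# Synthetic proton shifts spanning the 0-11 ppm range, standing in for
# experimental values taken from an open database such as NMRShiftDB.
y = 11 * X[:, 0] + rng.normal(0, 0.1, 200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Fit the forest and measure the mean absolute prediction error in ppm,
# the same figure of merit (MAE) reported in the study.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
```

With real descriptors and curated shift data, the same loop yields the kind of per-method MAE comparison (random forest vs. J48 vs. SVM) the talk reports.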
|Pharmacophore elucidation is a difficult problem involving the determination of the 3D description of ligand-protein interactions in the absence of the protein receptor. One reason for the lack of progress in the field is the lack of appropriate test data, which hampers algorithm development and can lead to programs that perform well on well-studied examples and poorly in unknown situations. We are developing a challenging set of test systems (ranging in size from 2 to 16 ligands), based on a study of the Astex cross-docking test set. Currently the pharmacophore test set contains ten systems. Previously we developed a Multi-Objective Genetic Algorithm (Cottrell et al., JCAMD, 20, 735-749, 2006). We describe the construction of the test sets and give results obtained by the MOGA on a selection of the test complexes, illustrating some of the problems posed by this challenging set.|
|Integrated visualization of least squares, partial least squares and robust regression quantitative-structure-property models enables rapid (1) identification of problems in modeled data and structures, (2) location and characterization of outliers, and (3) insights into model interpretation. This talk demonstrates how integrated visualization facilitates the creation of a QSPR model for surface tension from a data set of 399 measurements. Bad data, bad leverage points, bad structures, and inadequacies in descriptor space are rapidly identified and corrected.|
|Recent emphasis on the assessment of the true prediction scope of in-silico models allows us to define a chemical space where we can expect a model to perform within a given accuracy guideline. We can also capture internal statistics for individual models. Using these models, we can reevaluate any need for compound screening while, at the same time, allowing active learning for the model.
We present herein the status of our work towards automated compound submission and active learning. We introduce the concept of “automated submissions”, that is, a mechanism that uses in-silico models and sends only those compounds for screening which it cannot predict with a high level of confidence. This mechanism not only decreases the number of compounds being screened but, also, allows a model to iteratively expand its chemical space where it has limited prediction scope.
We believe that there are several practical applications of this concept. For example, the model can choose compounds outside of the training sets' chemical space to send for screening and, thus, increase chemical space coverage over time. This process delivers significant cost savings.
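The "automated submission" mechanism described above can be sketched as a simple confidence filter: compounds the model predicts confidently keep their in-silico value, while the rest are routed to experimental screening and later folded back into the training set. All identifiers, values, and the threshold below are hypothetical.

```python
# Hypothetical (compound_id, predicted_activity, confidence) triples
# produced by an in-silico model.
predictions = [
    ("cmpd-001", 0.92, 0.97),
    ("cmpd-002", 0.40, 0.55),
    ("cmpd-003", 0.75, 0.90),
    ("cmpd-004", 0.10, 0.48),
]

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff for "high level of confidence"

# Confident predictions: accept the in-silico value, skip the assay.
predicted_in_silico = [c for c, _, conf in predictions
                       if conf >= CONFIDENCE_THRESHOLD]

# Low-confidence compounds lie outside the model's reliable chemical
# space: send them for screening, then retrain on the new measurements
# (the active-learning step that expands coverage over time).
sent_to_screening = [c for c, _, conf in predictions
                     if conf < CONFIDENCE_THRESHOLD]
```

Only the second list incurs assay cost, which is where the savings described above come from.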
|The long term goal of this project is to develop a computerized system with problem-solving capabilities in synthetic organic chemistry comparable to those of a human expert. At the core of such a system should be the ability to predict the course of chemical reactions to, for instance, validate synthesis plans. Our first approach, based on encoding expert knowledge as transformation rules, achieves predictive power competitive with chemistry graduate students, but requires significant knowledge engineering to expand its coverage to new reactivity. To overcome this limitation and achieve greater predictive power, our current approach is not based on specific rules, but instead upon general principles of physical organic chemistry. These principles allow the system to elucidate the mechanistic pathways and reaction coordinate energy diagrams of simulated reactions. These results directly mimic the qualitative problem-solving ability of human experts, but with the speed, precision, and combinatorial power of an automated system.|
|Small molecules can be used as combinatorial building blocks for chemical synthesis, as probes for analyzing biological systems, and for the discovery of drugs and other useful compounds. Large repositories containing millions of small molecules have recently become publicly available. The tools to search these repositories, however, lack the statistical precision and effectiveness of comparable tools developed to search repositories of biological sequences, such as BLAST. A fundamental bottleneck is that the theory of the distribution and statistical significance of chemical similarity scores has not yet been developed. Here we remove this bottleneck by developing: (1) chance models of molecular fingerprints; (2) accurate approximations to the similarity score distribution; (3) accurate approximations to the extreme value distribution of similarity scores; (4) z-scores and e-values (p-values) to measure the statistical significance of chemical similarity scores. The approach is validated in several projects, including finding new drug leads against important targets.|
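The core idea above can be sketched by scoring a query against an empirical chance model built from random fingerprints; the z-score then measures how far an observed similarity departs from what chance alone would produce. This is a minimal sketch under assumed parameters (64-bit fingerprints, fixed bit density); the actual work derives analytic approximations rather than sampling.

```python
import random

random.seed(42)
N_BITS = 64

def random_fingerprint(density=0.3):
    """Chance-model fingerprint: each bit set independently."""
    return frozenset(i for i in range(N_BITS) if random.random() < density)

def tanimoto(a, b):
    """Tanimoto similarity of two bit-set fingerprints."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

query = random_fingerprint()
hit = frozenset(query | {0, 1})  # a molecule nearly identical to the query

# Empirical chance model: similarity of the query to random fingerprints.
background = [tanimoto(query, random_fingerprint()) for _ in range(2000)]
mean = sum(background) / len(background)
var = sum((s - mean) ** 2 for s in background) / len(background)
std = var ** 0.5

# z-score of the observed similarity under the chance model: a large
# value flags a match unlikely to arise by chance.
score = tanimoto(query, hit)
z = (score - mean) / std
```

From such a z-score, an e-value for a repository search follows by multiplying the tail probability by the number of comparisons made.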
|The interactions of alkali metal cations (Li+, Na+ and K+) with the cup-shaped molecules tris(bicyclo[2.2.1]hepteno)benzene and tris(7-azabicyclo[2.2.1]hepteno)benzene have been investigated at the MP2(FULL)/6-311+G(d,p)//MP2/6-31G(d) level of theory. The geometries and interaction energies are compared with the metal ion bound complexes of trindene, benzotripyrrole and benzene. The cup-shaped molecules exhibit two faces or cavities (top and bottom). The cavity selectivity of the cup-shaped molecules for alkali metal ions is discussed. As evidenced by the values of the pyramidalization angles, the host molecule becomes a deeper bowl when the lone pairs of electrons of the nitrogen atoms participate in binding with cations. Molecular electrostatic potential surfaces nicely explain the cavity selectivity in the cup-shaped systems and the variation of interaction energies for different ligands. Vibrational frequency analysis is useful in characterizing different metal ion complexes and in distinguishing top- and bottom-face complexes of metal ions with the cup-shaped molecules.|
|The NSF-sponsored LANGURE (Land Grant Universities Research Ethics) project has involved scientists across a variety of disciplines working with ethicists and philosophers to develop an on-line course in research ethics for graduate students. The LANGURE website defines plagiarism as “... representing the ideas and/or writing of another as one's own. This includes using another author's sentences or paragraphs, or significant portions thereof, without quotation marks and an appropriate citation, and conveying information that is not commonly known without citing the authors whose original discovery or insight that information represents.” This talk will present an exercise developed for the LANGURE project that the author has successfully used for several years to help graduate students understand what is (and what is not) plagiarism.|
|One of the more challenging aspects of teaching is the construction of tests. Due to this challenge, there are a number of test bank resources available to instructors, often associated with a textbook. Perhaps with the abundance of such resources, it may not be surprising that confusion exists about when it is legitimate for an instructor to copy a test item and when such an act is a violation of copyright law. In the case of ACS Exams, because they are secure test instruments, the specifics of copyright are slightly different than other educational resources - and the implications of violations of copyright more dramatic. This talk will provide an overview of the nature of secure test copyright and include examples of instructor plagiarism of ACS Exams and how these incidents are handled by the Exams Institute.|
|As part of our efforts in the professional development of practicing chemists, we recently completed a qualitative study of 8 senior chemistry graduate students. The focus of this research was to probe how chemistry graduate students learn scientific norms and how they make ethics-based decisions. A major finding of this research was that writing scholarly manuscripts was the activity during which these students were most aware of ethical issues in science. The writing process forced students to develop personal definitions of plagiarism, which, in turn, helped them to reflect on how credit is (or should be) given in science. In addition to presenting the results of this study, this talk will also describe how considerations regarding plagiarism promote development of the students' personal epistemologies of science.|
|Plagiarism is a violation of academic integrity that occurs in all sectors of campus, but how prevalent is it in the sciences? Are there distinctions to be made between plagiarism and inappropriate collaborations? Can we control our teaching environments to make them less conducive to the threat of plagiarism? In what ways can policy, interpretation, and enforcement promote better learning with less cheating? An Associate Dean in the Office of Undergraduate Studies will present these issues and various case studies relevant to ethical academic behaviors in the science curriculum. A new procedural device developed to encourage interaction between instructors and students accused of plagiarism will be discussed.|
|Ethics courses are not entirely absent from the undergraduate curriculum; the subject can be found in most introductory philosophy courses. An undergraduate course devoted to exploring and discussing the ethical issues facing scientists and the scientific research community, however, is seldom taught. As science pushes the boundaries of life by exploring its nuances and investigating its beginnings, the need for discussing and understanding the ethical ramifications of these explorations is paramount to the scientific community and society at large. Science Ethics & Morals (Chemistry 2600) is a two-credit undergraduate course offered at Armstrong Atlantic University that investigates and discusses ethical issues and concerns confronting the scientific community today, as well as those of the past. Topics discussed in Chemistry 2600 range from plagiarism and cheating in the science classroom and research labs to animal rights. This presentation will discuss the course's pedagogy, prerequisites, student evaluations, and the problems faced when teaching an ethics course at the undergraduate level.|
|Plagiarism, on many levels, is not getting any easier to address. The internet makes it too easy for students simply to pick up the correct answers. Is it wrong to have the correct answer? The problem stems from the premium placed on correctness and the accompanying stress that students feel; no one should have to chase the correct answer desperately or at any price. The price of losing great students is unacceptable, but that must not lead to softness on direct copying, because such softness works against real learning. We must use new detection and enforcement technology and constantly remind students that plagiarism cannot be tolerated. I will discuss some of the new directions plagiarism has taken and how we can learn to correct the problem. The future should yield a better academic direction, and professors in science must take the lead.|
|The intended use of literature review as writing to learn is often sidetracked by problems with intentional and unintentional plagiarism. The use of peer review with the aid of rubrics and examples alleviates most difficulties with paraphrasing and documentation in scientific writing, placing the responsibility for avoiding plagiarism as well as for learning in the students' hands. Presenter will share methods, rubrics, and example student texts from a recent biochemistry course.|
|Plagiarism is a topic that is discussed in schools in the United States beginning in the upper elementary grades through high school. It is also a topic that is routinely discussed in undergraduate and graduate courses. Despite this apparent emphasis, however, plagiarism remains one of the top research misconduct issues with graduate students and with proposals submitted to federal funding agencies such as the NSF and NIH. This suggests we are perhaps not really teaching students what plagiarism really is or how to recognize it in their own work. Over the past decade, we have developed responsible conduct in research (RCR) classes and workshops at both the graduate and undergraduate levels, wherein we have attempted to directly address the issue of teaching students what plagiarism is via examples and extended discussions. Specific examples and methodologies used will be presented along with preliminary data on student conceptions and misconceptions regarding plagiarism.|
|Jmol Protein Explorer, http://Jmol.ProteinExplorer.org, is a web application that uses the signed Jmol applet (http://Jmol.sourceforge.net) to enable exploration of biomolecular structures from a user's local or network drive, from the Protein Data Bank (http://www.rcsb.org), or from any other available web site. Based on the widely used Protein Explorer for the Chime plug-in, Jmol Protein Explorer adds several new features, including the capability of displaying and working with PDB "biomolecules", the display of 3D Ramachandran plots, visualization of amino acid residues and nucleic acid base absolute and relative orientation using quaternion maps, the ability to save the current state to the local drive, and the capability to send the current view as a 3D Jmol model to oneself or a colleague via E-mail. This presentation will focus on some of the more unusual capabilities of Jmol Protein Explorer, highlighting ways in which they can be used in a classroom or laboratory context.|
|The advent of open access chemical databases and open source cheminformatics software packages has created new opportunities for both teaching and learning chemistry. Modern visualization software adds the experience of dynamics and the overlay of colour-coded properties with molecular shape information. Jmol offers such powerful visualization for free with millions of compounds available from PubChem to play with. Open-access databases in chemistry remove the requirement for expensive licenses for commercial chemistry databases to train students in structure and similarity searching. Last but not least, we argue that programming with a cheminformatics library on the source code level will lead to a deeper insight into structural chemistry than the pure text book experience. This talk will try to assess the current state of open access databases and open source software in chemistry and will point out how these resources may be used for educational purposes.|
|Computational chemistry increasingly pervades the taught chemistry curriculum. Historically, it has been appended to regular laboratory exercises associated with e.g. organic/inorganic/physical courses. This year, we have introduced a computational chemistry laboratory in an integrated form covering many topics (including a novel spectral prediction module), presenting it as a Wiki, and providing laptops to each student with all the required software on a readily maintainable image. We chose this approach for several reasons; the Wiki is a read/write environment not only for the course team, but for the students. It also allows 3D molecular models to be integrated using Jmol (which also supports interesting isosurfaces such as MOs, MEPs, rho(r), ELF etc), and finally because the students can have write access to most parts of the course, and particularly to the discussion areas. The course itself and associated discussion is visible at http://www.ch.ic.ac.uk/wiki/|
|Online educational resources and tools have created extraordinary opportunities for learner-centered teaching in our courses. Millennials have come of age believing that these tools are part of their everyday world. To engage this new generation in our chemistry courses, several easy-to-use online tools and gadgets that require little to no training or investment by instructors will be demonstrated. The advantages and possibilities of enhancing student learning using these tools will be highlighted. This presentation will focus on using wikis, PowerPoint narration and Flip cameras in teaching a large freshman chemistry class. Wikis are web pages that can be accessed, edited and improved by multiple users with a web browser and internet access. In this presentation, strategies used to enhance students' laboratory report writing skills will be demonstrated. The use of PowerPoint narration and Flip cameras to advance learning outside of the classroom will also be demonstrated.|
University of Colorado's PhET project is an ongoing effort to provide an extensive suite of simulations for teaching and learning science and to make these resources both freely available from the PhET website (http://phet.colorado.edu) and easy to incorporate into classrooms. PhET has already developed over 17 simulations focusing on chemistry topics such as equilibrium, solubility, reaction coordinates, pH, atomic structure, and atomic energy levels. Our simulations are animated, interactive, and game-like environments in which students learn through exploration. In these simulations, we emphasize the connections between real life phenomena and the underlying science and seek to make the visual and conceptual models that expert scientists use accessible to students.
Here, we will introduce several PhET simulations and will highlight important results from our active research program that guide both the design and use of simulations to effectively enhance student learning and engagement.
|Chemical Education Digital Library (ChemEd DL) provides an online repository for chemical education related digital resources we call ChemEd Content. ChemEd DL, a Pathway project of the National Science Digital Library (NSDL), extends beyond the cataloging of resources to provide content management for the resources themselves. In providing the facility to manage the content, ChemEd DL offers collaborative working spaces to develop content, version control, and community discussion. As part of the repository, resources can be more intimately connected with one another around a specific topic of chemistry education. For example, a contributed resource is immediately linked to Journal of Chemical Education articles, video clips, and textbook tables of contents, which can help to explain the chemistry behind the resource. Through its repository, ChemEd DL allows teachers and students to discover digital materials that augment their learning of chemistry.|
|College Mentors for Kids is an innovative non-profit that pairs children with local college student mentors for weekly activities that expose youth to the opportunities of higher education. The program is based in Indiana and serves over 1,000 children with over 1,000 college mentors and 150 student leaders. The mission of College Mentors is to motivate youth and communities to achieve their potential by fostering inspiration to transform lives, education to change attitudes, and connections to increase opportunities. This presentation reports the ongoing efforts of Wabash College students and faculty in developing exciting and meaningful chemistry activities for the College Mentors for Kids program. These events include chemistry demonstration presentations, hands-on activities, and discussions with professional chemists. Activity planning, execution, and assessment will be discussed.|
|If we, as chemists, do not articulate the benefit and utility of chemistry in people's daily lives, who else will embrace this task? In answer to this fundamental challenge, I have designed three courses for non-science majors over the past decade that place chemistry in a context that allows students to understand and appreciate its utility in their daily lives. This has been an incredibly valuable tool in terms of “public outreach”. Students in these courses have majors that may shape the general public's perception of chemistry in the future, including, but not limited to, education, political science, occupational therapy, and biology. The design and execution of these courses (The Biochemistry of Working Out; Exploring the Science of Addiction; and The Chemistry and Politics of Cancer and AIDS) will be presented.|
|Local television public access stations welcome new presenters, and science programming is especially desirable. Contacting a local station to request a public service announcement for National Chemistry Week, this college professor, totally inexperienced in media, found herself first interviewed on a news show and then co-producing an award-winning chemistry experiment show featuring her students. The show airs frequently after school and in the evening, and we are currently producing a new show for preschoolers. Here's a look at the final product and a behind-the-scenes account of the bloopers, editing, and technical wizardry you don't see.|
|From the 1940s through the 1970s, uranium mining took place on the Navajo reservation, leaving hundreds of abandoned mines and areas of mine waste. The legacy of these past mining activities continues to be a problem for the people who live near the abandoned mines on the Navajo Reservation. The Army Corps of Engineers, our laboratory, and others have shown that several unregulated water wells on the Navajo Reservation have elevated levels of uranium in the water. In addition to the water, soil, plants, and livestock have also shown elevated uranium. Our approach is to have Navajo student and faculty researchers work directly with the Navajo communities to identify areas of interest and to collect the samples. The information learned from these studies is reported back to the affected Chapters.|
|For more than 30 years, this author has presented both programs and hands-on workshops in schools, museums, public libraries, hotels, restaurants, fire houses, parks, and television studios to various audiences around the world. Using varied formats, major topics have included “Chemistry in the Toy Store”, “The Science of Soap Bubbles”, “Polymers”, “Magic into Science”, and “Cooking With Chemistry”. These presentations have resulted in the development of new activities and popularization of others, many of which are utilized in science education throughout the world today.|
|In order to improve students' preparation for the Organic Chemistry lab, a series of online tutorials was created using Adobe Presenter. Each tutorial focuses on a lab technique (distillation, extraction, TLC, crystallization, and melting point). The theory and background of each technique are presented, along with streaming video demonstrations. The tutorials have been found to prepare students effectively for lab, and the student response has been very favorable. http://www.csupomona.edu/~lsstarkey/ochemlab|
|The structural identification of organic compounds uses a combination of spectroscopic techniques. Even an introductory lecture course on this topic could easily encompass 10 lectures or a whole semester, and the material is just as easily forgotten, so students need to review it regularly. We present our experience with short (8 - 15 minute) videos designed to introduce students to the concepts of fragmentation patterns in mass spectrometry, as well as the analysis of infrared, ultraviolet-visible, and 1H NMR spectra. In addition to a fast pace and emphasis on key features, we avoided the usual "PowerPoint" lecture format by presenting the material on a whiteboard using simple visual effects. Students can either stream the videos on the web or download them for viewing on an mp4 player. Advantages and disadvantages of this approach, and its impact on student understanding and comprehension, will be discussed.|
|Our group has developed the WE_LEARN system for organic chemistry to be a “Practice Makes Perfect” system. A basic assumption of this system, and of many teachers and professors, is that more time on task translates to better mastery of the subject matter. In this presentation, this assumption is challenged by analyzing over 1,000,000 student attempts on assignments, collected over several years of usage, to evaluate whether a correlation between time on the subject matter and course mastery (e.g., final grade, final exam mark) does, indeed, exist.|
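The time-on-task question the abstract poses reduces to a correlation between usage and outcome. As a minimal, hedged sketch of that computation (the data and the `pearson_r` helper below are invented for illustration and are not part of WE_LEARN):

```python
# Sketch of a time-on-task vs. mastery correlation; all values invented.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hours logged on assignments vs. final exam mark (hypothetical students)
hours = [2, 5, 8, 11, 14, 17, 20, 23]
marks = [55, 60, 64, 70, 72, 78, 81, 85]
print(round(pearson_r(hours, marks), 3))  # close to +1 for this invented data
```

A real analysis of a million attempts would of course have to control for confounders such as prior preparation; the sketch only shows the shape of the computation.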
|Synthesis Explorer is an interactive tutorial system for organic chemistry that can generate a virtually limitless number of multi-step synthesis design and reaction mechanism problems with support for inquiry-based learning. This electronic tutor is powered by an underlying reaction expert system, comprising over 80 reagent models and 1,500 manually-curated reaction pattern rules, giving it inherent predictive power spanning the undergraduate organic chemistry curriculum. By mapping the relationships between these rules into a hierarchical subject dependency graph, the system can automatically assess the student's current knowledge state. This in turn enables the system to dynamically adapt to the student's knowledge. By generating personalized problems of appropriate difficulty that specifically target material at the boundary of a student's current knowledge, the system can optimize learning trajectories. Pedagogical experiments in undergraduate classes indicate that the system can improve average student examination performance by ~10%. The system is accessible at http://cdb.ics.uci.edu.|
The ACS Green Chemistry Institute® has developed two online tools that can be used for teaching green chemistry. The Green Chemistry Resource Exchange (www.greenchemex.org) is a database of green chemistry technologies and information resources. This tool can be helpful for bringing green chemistry examples into the classroom. The database is searchable so the instructor can easily find appropriate academic and industrial examples of green chemistry that tie into the topic that the class is covering.
The second online tool, the National Environmental Methods Index (NEMI; www.nemi.gov), is a database that can be used to bring green chemistry concepts into the teaching of analytical chemistry and environmental testing methodologies. In this database of analytical methods, many of the testing methods have been evaluated and assigned greenness profiles. The greenness profiles consider four criteria: whether a chemical used in the method is persistent, bioaccumulative, and toxic (PBT); whether a chemical used in the method is hazardous; whether the pH is less than 2 or greater than 12; and whether the amount of waste generated is greater than 50 g. This online tool can be useful for comparing the greenness of methods and teaching about green analytical chemistry.
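The four criteria amount to a simple screen. A minimal sketch, assuming a made-up function name and inputs (NEMI's actual profile format differs):

```python
# Toy NEMI-style greenness screen; the four criteria follow the abstract,
# but the function name and data structure are our own invention.
def greenness_profile(uses_pbt, uses_hazardous, ph, waste_g):
    """Flag each 'not green' criterion for an analytical method."""
    return {
        "PBT chemical": uses_pbt,
        "hazardous chemical": uses_hazardous,
        "corrosive pH": ph < 2 or ph > 12,  # extreme pH counts against greenness
        "excess waste": waste_g > 50,       # more than 50 g of waste generated
    }

# hypothetical method: one hazardous reagent, neutral pH, 10 g of waste
profile = greenness_profile(False, True, 7.0, 10)
print(sum(profile.values()))  # number of criteria failed -> 1
```

Comparing two methods then reduces to comparing how many (and which) flags each one raises.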
|GEMs is an interactive, web-based database of Greener Education Materials for Chemists. The database is designed to be a comprehensive resource of educational materials including laboratory exercises, lecture materials, course syllabi, and multimedia content that illustrate chemical concepts important for green chemistry. GEMs has become a focal point for facilitating a community-based approach to curriculum development. Because green chemistry represents a principle-based approach to the design and manufacture of chemical products and processes, it is providing educators with a uniquely flexible and interdisciplinary framework for the development of new education materials. This paper will describe two new features of the database that provide additional resources (e.g., ideas for implementation, assessment materials, and links to related resources) and a forum for capturing and sharing educators' experiences and recommendations regarding adoption of these materials. The URL for the GEMs database is http://greenchem.uoregon.edu/gems.html.|
|For some time now we have been working on a project to design and provide visually rich, network-delivered general chemistry content. As this project has progressed, we have arrived at a format in which a talking-head video is simultaneously accompanied by visual media such as virtual blackboard presentations, photos, video clips, and animations, all designed to support and illustrate the material. In addition, we have created inline self-assessment quizzes and problem-solving video tutorials as learning aids. This format provides the student with a concurrent presentation of visual elements and oral delivery of the content as an alternative to the traditional printed textbook. The content and features of a representative sample of these materials will be presented, along with a brief discussion of their potential impact on teaching and learning.|
|As both the costs of textbooks and student interest in computer-based instruction rise, it seems to make sense to provide low-cost, internet-based textbooks. This presentation describes one such text, An Introduction to Chemistry by Mark Bishop, found at preparatorychemistry.com. There will be a description of the novel approach to pricing this text so that the student cost can vary from free to $20 to $79.95, depending on students' financial needs and whether the student wants the online version or a printed text. The various components of the web-text will be described - PDF files of the text and study guide, flash-based audio presentations, animations, tutorials, glossary quizzes, concept maps, Jmol structures, chapter checklists, and more. There will also be a description of the tools necessary to create your own such text.|
|In the second year of the program, our methodology uses screen-capture software to create Video-based Additional Instruction (VAI) for General Chemistry in order to foster problem-solving skills and conceptual understanding. The supplemental resource was linked to an online syllabus, which allowed students to seek or pull content as needed. Using a log-in based system, we are able to quantify individual usage of particular materials and correlate that usage with student performance on related questions in course-wide graded events. This research also explores patterns of usage, including whether students access the material before or after the scheduled lesson date (as preparation or review) and when they access it relative to the administration of graded events, and correlates the relative success of each habit. Additionally, this work analyzes feedback from students who either outperformed or underperformed relative to their anticipated scores, correlating that performance with VAI usage.|
|This presentation will review the creation and preliminary testing of an assignable, fully integrated online textbook and homework system for general chemistry. This project is an extension of the OWL electronic learning system and involves the blending of text, problem-based homework, and interactive modules. While the organization of the material is traditional in order and scope, the presentation intermixes noninteractive material such as static explanations, video examples, and whiteboard problem solutions with interactive and assignable figure-based exercises, concept simulations, tutorials and problem-based homework. The principal goal of the project is to create a system in which the students experience “text” and assignable homework as an integrated whole. Results from preliminary tests with two classes will be presented, highlighting how students navigate the system, which parts they do and do not use, and how assignability influences their decisions as to how to use the system.|
|This paper will present a set of online instructional materials that are designed for use in discipline-specific courses, yet help students to draw connections between disciplines. The initial target courses include chemistry, materials science and biology. The disciplines share goals related to molecular science and, although the focus and details may differ, "recurring patterns" appear in the explanatory frameworks and tools employed in each of the disciplines. The goal of our instructional materials is to help make these recurring patterns explicit for students, such that they can integrate the ideas across disciplines and construct a coherent and robust set of knowledge. The materials we have developed to date are related to the use of free energy landscapes to understand the effects of temperature on molecular processes. The materials are housed in the Materials Digital Library (www.matdl.org).|
|This presentation discusses how statistical learning and data mining techniques can be used to analyze crystallographic patterns in nanostructures. We show how, by integrating electronic and crystal geometry information into both classification and predictive data mining techniques, one can extract complex rule-based design strategies for materials, and specifically nanomaterials. We also discuss how statistical learning techniques can be used to augment more classical approaches to computation-based design of materials. The role of data mining in identifying the dominant parameters influencing phase stability calculations is demonstrated. The use of such informatics-based techniques to accelerate first-principles computational approaches is also discussed.|
|The explosion of computational and experimental methods able to glean detailed information about atomic and molecular structure, combined with control over organization at the nanoscale has led to an exponential increase in the complexity of the information space. These exciting developments have the potential to lead to true materials design. Using polymer nanocomposites as a model system, we are using informatics to develop a set of design rules based on a fundamental understanding of the filler/matrix interface enthalpy and entropy, the polymer structure and dynamics in the interfacial region, and the assembly or aggregation of nanofillers. In order to bridge the length-scale and time-scale gaps, we combine first principles calculations with heuristics and analytical modeling to predict the thermomechanical behavior of polymer nanocomposites. Experimental data is mined from the available literature for thermomechanical property changes as a function of constituent phases, and as available, nanoparticle aggregation.|
|Intrinsic carbon nanotubes (CNTs) have extraordinary mechanical properties, yet there is considerable variation in experimental measurements. During the growth and post-growth stages, defects can be inadvertently or intentionally added to the CNTs. It is believed that the variation in mechanical properties is due in part to the presence of these defects as well as other heterogeneities. Via a methodical exploration of the potential parameter space, using molecular dynamics (MD) simulations for data generation, we will investigate the feasibility of deriving a quantitative structure-property relationship (QSPR) between the structural features and mechanical properties of CNTs. We will evaluate the data set to derive an appropriate descriptor set, investigate a variety of linear and nonlinear methods to build the QSPR, perform model validation, and define a domain of applicability. A QSPR would provide more visibility into the mechanical property space without having to execute lengthy MD calculations.|
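As a hedged, one-descriptor caricature of the QSPR workflow just described - a linear fit plus a domain-of-applicability check - where the defect-fraction descriptor and all numbers are invented:

```python
# Toy QSPR: invented defect fraction vs. fraction of modulus retained.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

defects = [0.00, 0.01, 0.02, 0.03, 0.04]   # invented training descriptors
modulus = [1.00, 0.96, 0.91, 0.88, 0.83]   # invented property values
m, b = fit_line(defects, modulus)

def predict(x, lo=min(defects), hi=max(defects)):
    # refuse to extrapolate outside the trained descriptor range
    if not lo <= x <= hi:
        raise ValueError("outside domain of applicability")
    return m * x + b

print(round(predict(0.025), 3))
```

A real QSPR would compare several linear and nonlinear learners and validate on held-out MD runs; the point here is only the fit-predict-bound structure.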
|Carbon nanotubes (CNTs) are being used in fiber-reinforced composites to increase mechanical properties such as modulus and stiffness. Multi-scale CNT composites can be analyzed using a simple methodology which combines analytical and finite element modeling. First, the enhanced matrix orthotropic elastic properties are computed by treating the CNTs as aligned inclusions of known dimensions and mechanical properties in the matrix. The shear strength of this enhanced matrix is computed from a shear lag model in which the interfacial strength of the CNT to matrix is specified. As needed, these orthotropic properties can also be reduced to equivalent isotropic properties computed from a quasi-isotropic laminate lay-up in which the individual plies are given the orthotropic properties. Next, and in either case, a progressive failure analysis is used to characterize the multi-scale structural properties. A finite element (f/e) model with progressive failure analysis of a three-point bend test was used to compute the interlaminar shear strength (ILSS) for a nanocomposite consisting of CNTs in a fiber-reinforced composite. The ILSS was computed versus CNT loading and interfacial strength (IFS): at 5% CNT loading, increasing the IFS by a factor of 4 increases the ILSS by a factor of 8.5, and at 10% CNT loading the ILSS increases by a factor of 11 for the same factor-of-4 increase in IFS. At 5% and 10% CNT loadings, the ILSS is a factor of 10 and 15 larger, respectively, than for the matrix with no CNTs. These calculations show a substantial increase in ILSS as the CNT loading and IFS increase.|
|In this study, molecular models of epoxy-based cross-linked polymers are first built and investigated by molecular dynamics simulation. Properties such as density, solubility parameters, and elastic moduli are computed and compared against experimental results, as well as against an idealized linear epoxy system. A mixed system of single-walled carbon nanotubes and the polymer matrix is then investigated, analyzing the impact of the CNT filler on the properties of the system. Finally, the results from the atomistic simulations are used to parameterize a mesoscale Dissipative Particle Dynamics (DPD) simulation to investigate the impact of cross-linking and chemical detail on the distribution of the carbon nanotubes in the polymer matrix.|
|The capability of nanoparticles to improve neat-resin properties is limited by the interfacial strength of the bond between the nanoparticles and the resin material. Because of the nanoscale size, it is extremely difficult to conduct experiments (pull-out tests) and to quantify (measure) the interfacial bond strength without scatter in the test data. In this paper, a high-fidelity procedure that combines Progressive Failure Analysis (PFA) and the Finite Element Method (FEM) is used. First, a CNT pull-out test is simulated using the combined PFA and FEM approach and calibrated with the limited average test data available in the literature. Second, the combined PFA and FEM approach is further integrated with probabilistic analysis to virtually simulate the scatter in the test data. The scatter in the simulated data comes from introducing variation in the aspect ratio (length/diameter) of the CNT and in the strengths of the interface, the matrix, and the CNT. Using the proposed approach, a good correlation between the simulation and experimental data is established.|
|The multiscale modeling program at Lockheed Martin, with its academic and commercial collaborators, emphasizes a value proposition based on reducing the number of physical experiments by a factor of 100 or more; knowing and controlling the most powerful interface in the materials or devices under development; formulating new functionality from first principles; accelerating materials or device design convergence; and greatly reducing materials development, engineering, and integration costs. The program strives to exploit physics-based models, component analyses, and design tools in conjunction with materials informatics to efficiently and rationally navigate the vast landscape between atomistic and bulk component length scales, and is a central effort that joins target materials and device development programs. In this talk we will describe the strategies and approaches toward bridging the modeling-experiment gap, define the key challenges, problems, and opportunities in multiscale modeling of nanoscale materials, and elaborate on our implementation of materials informatics to address these challenges.|
The ability to predict and control the density, position and orientation of nanoparticles in complex fluids and polymer matrices has far reaching applications in nanoscience and technology. For example, Whitesides has recently demonstrated how complex systems and devices may be “self-assembled” by control of particle geometry, concentration and surface chemistry. Such methods may offer an extremely attractive route in manufacturing in that nanodevices might be assembled rapidly and cheaply using only wet chemistry techniques. However, in such systems it is by no means obvious how entropic and enthalpic interactions couple with a surface potential or external field to minimize a local free energy to produce a desired structure. It is one thing to observe an interesting nanostructure in the laboratory but it is quite another to understand the intermolecular and nanoparticulate forces with sufficient fidelity to design and predict the properties, phase behavior, and long term stability of this class of matter.
In this paper we report on the development of a new finite granular dynamics computer simulation technique that solves the equations of motion for systems of interacting nanoparticles of arbitrary size and shape. Phase diagrams and transport properties for mixtures of spheres and triangles on a two-dimensional substrate are presented. These systems form glasses readily and issues regarding thermodynamic stability in the mesoscopic regime will be discussed.
|Nanocomposites have multiscaled features that can impact the macroscaled mechanical properties. The multiscaled features have length scales that can vary from angstroms to millimeters. They include the morphology, the interfaces, and the interphase. The morphology is described by the distribution of the multiwalled carbon nanotubes (MWCNT) in the polymer matrix, the length and the diameter of the MWCNT, and the waviness and the mechanical entanglement characteristics of the MWCNT forest. Image processing of Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM) images can be used to describe the morphology of the nanocomposite. The interfaces include the MWCNT-to-polymer interface and the MWCNT-to-fiber interface. Techniques such as Micro Raman, TEM of fractured surfaces, and Near Field Raman can provide information about the interfacial shear stress associated with these interfaces. The interphase is the region in the polymer in which the motion of polymer chains is constrained by the interface with the MWCNT or the fiber. One consequence is that the viscoelastic properties of the interphase are different from those of the bulk polymer. The change in the viscoelastic properties due to the interphase can be explored using Dynamic Mechanical Analysis (DMA). Information on the multiscaled features associated with nanocomposites can be used to build more accurate physics-based multiscaled models and heuristic models.|
|Carbon nanotubes (CNTs) have shown superior mechanical properties over the industry leading graphite fiber and experimentalists are getting closer to harnessing their full potential in composites. Estimation of mechanical properties would expedite the manufacturing optimization of nanocomposites. Traditional continuum modeling has overestimated mechanical properties of nanocomposites. The need for a multiscale or atomically informed continuum model is apparent. In this paper, we investigate a particular graphite epoxy laminate enhanced with CNTs synthesized from catalyst nanoparticles. We develop a modeling methodology that includes micromechanical parameters and atomistic information. Through this multiscale study, we will identify the critical modeling parameters necessary to incorporate into a continuum level finite element model. This model can be used to guide the optimization of nanocomposites.|
|The CULGI multiscale modeling library integrates a wide range of simulation techniques, including atomistic molecular dynamics, both particle-based and field-based mesoscopic methods, novel hybrid particle-field methods, and forward and backward mappers. We discuss ongoing work in applying multiscale modeling to typical industrial polymer nanophase materials, including the dynamics of morphology formation in heterodisperse polymer blends, the rheology modeling of branched polymer distributions, prospects for the rational design of nanocomposite materials, the calculation of cohesive energy densities, and the modeling of polymer surfaces and surface energies. Such scientific development is a challenge, not only because the necessary theory and software are difficult to create, but also because one changes language and wording: from force fields to finite elements, from chemist to engineer, from fundamental science to everyday practical science.|
|The mechanical properties of nanocomposite materials are critically controlled by the failure initiation mechanisms at the interfaces between matrix and embedded fibers. In this paper, we investigate the detailed structural and mechanical properties of the interfaces between polymer and carbon fiber in the presence of carbon nanotubes (CNTs) grown from catalyst nanoparticles attached to the carbon fiber surface. Through a systematic multi-scale modeling study, we will investigate the detailed mechanisms controlling the critical failure of load transfer at the interface. Molecular dynamics (MD) simulations using the modified embedded atom method (MEAM) potential play the central role of tracking detailed atomic structure evolution under the external loading conditions determined by continuum level analysis. We will report the MD study of CNTs and the Ni nanoparticle-CNT interface under diverse loading conditions. The findings of the multi-scale study will provide useful guidance to develop an optimization strategy for CNT-reinforced polymer composite materials.|
|Evaluation of the various biological effects of Manufactured Nanoparticles (MNPs) is of critical importance for nanotechnology. Experimental (especially toxicological) studies are time-consuming, costly, and often impractical, calling for the development of in silico approaches. We have begun to develop Quantitative Nanostructure-Activity Relationship models in which physical, chemical, and geometrical properties of the MNPs, such as composition, size, shape, aspect ratio, surface area, surface chemistry/morphology, zeta potential, and chemical reactivity, are used as the MNPs' descriptors. Using data recently obtained from in vitro cell viability assays (PNAS, 2008, 105, pp 7387-7392; Nat. Biotechnol., 2005, 23, pp 1418-1423), we have developed SVM-based classification and kNN-based regression models with strong external predictive power. As in conventional applications of QSAR modeling to organic biomolecular datasets, these models can be used to predict activity profiles of newly designed nanomaterials and to bias design and manufacturing toward better and safer products.|
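For the kNN-regression side of this approach, a minimal hedged sketch with invented descriptors and activity values (the published models use curated assay data and far richer descriptor sets):

```python
# Toy kNN regression over invented nanoparticle descriptors.
def knn_predict(train_X, train_y, query, k=3):
    """Average the activities of the k nearest training points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

# descriptors: (size_nm, zeta_potential_mV, aspect_ratio) -- all invented
X = [(10, -30, 1.0), (12, -25, 1.2), (50, 10, 3.0), (55, 15, 3.5), (60, 12, 4.0)]
y = [0.90, 0.85, 0.30, 0.25, 0.20]   # invented cell-viability scores
print(knn_predict(X, y, (52, 12, 3.2)))  # averages the three large-particle neighbors
```

In practice descriptors would be scaled before distances are computed and the model validated on an external test set, as the abstract's "external predictive power" implies.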
|Using nanomaterials for improving drug delivery systems is a new and exciting field of scientific study. Many fundamental issues remain unsolved, with one focus centered on excipient formulation performance. Here, QSAR analysis was applied to data generated from a systematic evaluation of nanoparticle formulation performance for several saccharide-based polymers (excipients) and drug-like molecules. The ability of a drug/polymer mixture to form quality nanoparticle suspensions in an aqueous solution can be measured by observing the behavior of the system over time. The resultant formulation can be classified, e.g., as good, fair or poor. A mathematical link between drug/polymer structures and performance classification has been developed. Random forest (RF) models reveal that the descriptors appearing to be of high influence are largely polymer based. This implies that polymer characteristics are the main driver of formulation performance. Such models can be used to predict the performance of new polymers in future drug formulations.|
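A full random forest is too heavy to sketch inline, so the following toy substitutes single-feature threshold "stumps" for trees, on invented data, to illustrate the abstract's finding that polymer descriptors, not drug descriptors, drive the good/poor separation:

```python
# Toy stand-in for variable-importance analysis: score how well each single
# feature separates good (1) from poor (0) formulations. All data invented.
def stump_score(X, y, feat):
    """Best threshold-rule accuracy achievable using one feature."""
    best = 0.0
    for t in sorted({x[feat] for x in X}):
        preds = [1 if x[feat] >= t else 0 for x in X]
        acc = sum(p == label for p, label in zip(preds, y)) / len(y)
        best = max(best, acc, 1 - acc)  # allow the inverted rule too
    return best

# features: (polymer_MW_kDa, polymer_charge, drug_logP) -- invented
X = [(5, -1, 2.0), (6, -1, 4.5), (7, -2, 1.0),
     (50, 0, 2.1), (60, 1, 4.4), (55, 0, 1.1)]
y = [1, 1, 1, 0, 0, 0]          # 1 = good suspension, 0 = poor
scores = [stump_score(X, y, f) for f in range(3)]
print(scores)  # polymer features separate the classes; drug logP does not
```

A real random forest would aggregate many such splits over bootstrap samples and report Gini or permutation importances, but the ranking logic it exposes is the same.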
|Possible sources of cellular toxicity due to the insertion of a carbon nanotube into a dimyristoylphosphatidylcholine (DMPC) membrane bilayer were explored using the membrane-interaction (MI-) QSAR methodology. Two large changes in the bilayer occur due to insertion of the carbon nanotube. First, there is an alteration in the packing of the DMPC bilayer molecules which extends at least 18 Å from the nanotube, and includes the creation of a relatively open, unoccupied cylindrical ring of 2 to 4 Å thickness directly around the nanotube. Second, the same bilayer structure which undergoes the change in structural organization also becomes much more rigid than when the nanotube is not inserted. Next, the affinities, expressed by log kb values, of 23 biologically active molecules to a carbon nanotube were estimated by molecular dynamics simulation, and then compared to the observed and estimated binding affinities of eight ligands to human serum albumin, HSA. The range of log kb values over the set of nanotube ligands is 0.25 to 7.14. Some ligands, like PGI2, bind in the log kb = 7 range, which corresponds to the lower limit of known drugs. Such significant levels of binding of biologically relevant compounds to carbon nanotubes could lead to alterations in the normal pharmacodynamic profiles of these high-affinity compounds and be a source of toxicity.|
|Two- to six-bladed molecular turbines were designed and modeled computationally. The structures are based on 10- and 12-vertex carboranes (C2B8, C2B10, CB11-) mounted on molecular grids or in metal-organic frameworks. Newton's laws and the Universal Force Field were used to study the response of the molecular turbines to external flows and electric fields. Simple properties such as rotation barriers, friction, and turbine efficiencies were extracted from the simulations. The results suggest that for turbines with more than three blades the efficiency decreases with an increasing number of blades.|
|Noble metal nanoparticles have been employed as biolabels for many years and have potential applications in sensing and photonics. However, numerous aspects of these systems remain unclear including the origins of their optical absorption spectra, ligand exchange reactions, and growth mechanisms. Recent crystal structure determination of small gold nanoparticles is currently enabling in-depth research into the properties and reactivity of these systems.
Small (< 2 nm) nanoparticles display multiple peaks in their optical absorption spectra rather than the strong plasmon resonance peak of larger nanoparticles. This characteristic is likely due in part to the structure of these systems. In this work, time-dependent density functional theory (TDDFT) is employed to calculate the optical absorption of the anionic Au25 nanoparticle and its silver and mixed metal analogs. The level of theory required to accurately compute the core structure and optical absorption spectrum of these systems is discussed. Precise core geometries are required in order to obtain good predictions for the splitting between the first two spectral peaks. The model potential used to compute the excitation spectrum is critical, but solvent effects play a relatively minor role.
The crystal structure of the neutral Au25 nanoparticle has also been solved recently, and experimental EPR data show that the structure has a single unpaired electron. Density functional theory calculations predict the g tensor and hyperfine coupling elements in good agreement with experiment, and enable explanation of the axial nature of the EPR data.|
|Understanding nucleation and growth is of key importance for many applications, e.g., metal nanoparticles and catalysts. In particular, it is crucial to control the morphology as well as the structure of the crystallites formed during the crystallization process. When and how the selection of a specific structure (or polymorph) occurs remains a long-standing issue. This is a very complex problem, resulting from a subtle interplay between thermodynamics and kinetics. Solving it has remained elusive so far, even for simple model systems composed of spherical particles. In this talk, we use molecular simulations to understand the molecular mechanisms underlying the formation of metal and semiconductor nanoparticles. Using accurate many-body potentials to model our systems, we carry out two different types of molecular simulations corresponding to the two steps of nucleation and growth. We first examine the formation of a nucleus of a critical size, which is an activated process and therefore requires sampling methods suited to the study of rare events. We then carefully study the subsequent evolution of the post-critical nucleus, both in terms of size and structure. Our simulation results shed light on the molecular mechanisms underlying structure selection during crystallization.|
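The activated character of critical-nucleus formation mentioned above is conventionally rationalized with classical nucleation theory, where a bulk driving term competes with a surface penalty. A minimal sketch in cluster-size form (the symbols Δμ and aγ are generic placeholders, not parameters from the talk):

```python
def cnt_barrier(dmu, a_gamma):
    """Classical nucleation theory in cluster-size form:
    dG(n) = -n * dmu + a_gamma * n**(2/3).
    Setting d(dG)/dn = 0 gives the critical size n* and barrier dG*.
    Returns (n_star, dg_star)."""
    n_star = (2.0 * a_gamma / (3.0 * dmu)) ** 3
    dg_star = -n_star * dmu + a_gamma * n_star ** (2.0 / 3.0)
    return n_star, dg_star
```

Because ΔG(n) has a maximum at n*, spontaneous fluctuations must climb this barrier, which is why rare-event sampling methods are needed in the simulations described above.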
|So far, there are no suitable methods for investigating the dynamic process of iron nanoparticle formation from iron atoms in the liquid phase. In the present study, the Dissipative Particle Dynamics (DPD) method was employed to simulate the formation of iron nanoparticles from iron atoms in hexadecane solvent and in the presence of stabilizers. The initial state of iron nanoparticle formation was defined as a disordered arrangement of iron atoms produced by hydrogenation of iron acetylacetonate. It was found that the repulsive force between iron clusters and the solvent is the driving force for the aggregation of iron atoms, and that the adsorption of stabilizers on the iron nanoparticles could prevent the growth of the nanoparticles. The effects of box size and time scale in the simulation were further investigated. The DPD simulation results for the iron nanoparticle, hexadecane, and stabilizer system agreed well with our experimental data.|
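The repulsive interaction that drives the aggregation in DPD is, in the standard Groot–Warren formulation, a soft linear force between bead pairs. A minimal sketch of the conservative part (parameter values and names are illustrative assumptions, not taken from the abstract):

```python
import numpy as np

def dpd_conservative_force(r_ij, a, rc=1.0):
    """Soft repulsion used in DPD (Groot-Warren form):
    F = a * (1 - r/rc) * r_hat for r < rc, zero beyond the cutoff.
    r_ij is the vector from bead j to bead i; a is the repulsion
    parameter between the two bead types."""
    r = np.linalg.norm(r_ij)
    if r >= rc or r == 0.0:
        return np.zeros_like(r_ij)
    return a * (1.0 - r / rc) * (r_ij / r)
```

In a full DPD integrator this conservative term is supplemented by pairwise dissipative and random forces that together act as a momentum-conserving thermostat; a large iron-solvent repulsion parameter then drives the demixing (aggregation) described in the abstract.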
|We have utilized coarse-grained molecular dynamics to investigate the controlled self-assembly of small, narrowly distributed C60 fullerene clusters via grafting of a single poly(ethylene oxide) (PEO) chain. We investigate the effect of both architecture (linear or star) and molecular weight in controlling the ability to promote the stabilization of small, stable fullerene clusters which resemble an inverted micelle phase, with the fullerene acting to form the micelle core. By using molecular weight and architecture as independent control variables, we demonstrate the ability to form clusters of varying size distributions and shape. We find that the tethered nanoparticles behave similarly to self-assembling lipid systems, with the particulate nature of the nanoparticle core causing quantitative variations in the observed behavior due to cluster packing constraints.|
|Gel systems based on self-assembled blends of amphiphilic ABA and AB block copolymers form stable, spatially extended networks with tunable viscoelastic behavior. The viscoelastic properties and morphology have been calculated employing a non-equilibrium oscillatory shear technique used with the dissipative particle dynamics (DPD) method, where the repulsion parameters were chosen according to the Flory-Huggins theory of polymer interactions. We have observed that the addition of AB diblock copolymer increases the relative number of bridgelike chains in the copolymer network in comparison with the pure ABA triblock. The addition of AB diblock also increases the micelle size at low copolymer concentration and does not have a significant effect on the micellar size at higher concentrations. We have demonstrated that our simulation results are in good qualitative agreement with the experimental data.|
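Choosing DPD repulsion parameters "according to the Flory-Huggins theory" usually refers to the Groot-Warren linear mapping between the χ parameter and the unlike-bead repulsion. A minimal sketch under that assumption (the 3.27 coefficient holds at reduced bead density ρ = 3; the like-bead baseline of 25 kT/rc is the common default, not a value given in the abstract):

```python
def dpd_repulsion(chi, a_like=25.0):
    """Groot-Warren mapping at reduced bead density rho = 3:
    a_ij ~ a_ii + 3.27 * chi_ij (in units of kT/rc).
    chi is the Flory-Huggins interaction parameter between
    the two bead species."""
    return a_like + 3.27 * chi
```

A positive χ (immiscible blocks) therefore translates into extra repulsion between unlike beads, which is what drives the microphase separation into the micellar network studied in the talk.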
|The high-frequency (GHz) mobility of charges on isolated conjugated polymers can now be measured in solution, providing detailed information on the intrinsic mobility of organic materials. Most current calculations of this mobility are based on propagation of the time-dependent Schrödinger equation on a disordered chain. Here, we assume instead that the wavepacket dephases rapidly in solution, and that the mobility reflects the tendency of a charge to self-localize on the chain and planarize the region upon which it is localized. Our model treats the polymer as a linear chain of sites with electronic couplings that vary with torsional angle, with the solvent included via Brownian dynamics. The parameters that determine the randomized force applied to the torsional angles are directly related to the rotational diffusion time of a single phenyl ring in solution. The results therefore provide an estimate for the polaron mobility as a function of rotational diffusion time.|
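The two ingredients of the model above, a site chain whose electronic couplings depend on torsional angles and an overdamped Brownian update for those angles, can be sketched as follows. This is a generic tight-binding/Langevin sketch under stated assumptions (the cos θ coupling law, the parameter names, and all default values are illustrative, not taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def chain_hamiltonian(thetas, eps=0.0, t0=1.0):
    """Tight-binding Hamiltonian for a linear chain of sites.
    Each nearest-neighbour coupling scales with cos(theta) of the
    intervening torsional angle, so a planar chain (theta = 0) is
    maximally conjugated and a 90-degree twist decouples the sites."""
    n = len(thetas) + 1
    H = np.diag(np.full(n, eps))
    for i, th in enumerate(thetas):
        H[i, i + 1] = H[i + 1, i] = t0 * np.cos(th)
    return H

def brownian_step(thetas, torque, dt=1e-3, D=0.1, kT=1.0):
    """Overdamped Langevin (Brownian dynamics) update of the torsions.
    D plays the role of the ring's rotational diffusion coefficient;
    torque(thetas) is the systematic torque on each angle."""
    noise = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=thetas.shape)
    return thetas + (D / kT) * torque(thetas) * dt + noise
```

Diagonalizing H at each step gives the instantaneous charge states; self-localization shows up as a low-lying eigenvector concentrated on a planarized segment of the chain.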
|Interactions of Li+ with the external and internal surfaces of defect-free and Stone-Wales defective (6,6) armchair single-walled carbon nanotubes have been investigated using density functional theory. Comparisons of the structures and interaction energies were made between the (6,6) SWNT and a graphene sheet in order to examine the effect of curvature on Li+ binding. The results indicate that the internal surface of the nanotube has a slightly stronger preference for Li+ adsorption than the external surface in both defect-free and Stone-Wales defective tubes, with few exceptions at the defect region. Binding of Li+ affects the band gap of the nanotube as well as of the graphene sheet. The endohedral complexes possess higher HOMO-LUMO gaps than the exohedral complexes for both defect-free and defective tubes. Substantial electron charge transfer takes place from the nanotube to the Li+ ion. The present study reveals that the diffusion of Li+ inside the nanotube can take place more easily than outside the tube.|
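The two quantities compared throughout the abstract, interaction energy and HOMO-LUMO gap, are simple bookkeeping on total and orbital energies. A minimal sketch with hypothetical function names and closed-shell filling assumed (the abstract does not specify these conventions):

```python
def binding_energy(e_complex, e_tube, e_ion):
    """Adsorption/interaction energy of Li+ on the tube:
    E_bind = E(complex) - E(tube) - E(Li+); more negative = stronger."""
    return e_complex - e_tube - e_ion

def homo_lumo_gap(orbital_energies, n_electrons):
    """Gap between the highest occupied and lowest unoccupied level,
    assuming each level below the Fermi level is doubly occupied."""
    occ = n_electrons // 2
    levels = sorted(orbital_energies)
    return levels[occ] - levels[occ - 1]
```

Comparing E_bind for endohedral vs. exohedral complexes, and the corresponding gaps, is how the inside-vs-outside preference reported above is quantified.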
|Further advancement of CNT-based nanoelectronics is impeded by the difficulty of constructing precisely controlled interconnections. A central issue is how to achieve well-defined molecular interactions among the building blocks, which is of paramount importance to the molecular assembly and stability of future devices. As a counterpart to CNTs, metal nanowires have shown potential for microelectronic applications. The thinnest nanowires, i.e., monatomic chains, of several transition metals (TMs), including gold, platinum, and silver, have already been experimentally produced and observed by high-resolution transmission electron microscopy. Here, we report the first theoretical evidence for the molecular architecture of TM strings supported on boron-doped single-walled CNTs (B-SWCNTs), exhibiting high stability and unexpected electronic properties. The B-SWCNT-templated TM strings demonstrate strong molecular recognition, leading to the self-assembly of TM atoms with well-defined covalent bonds. The TM strings studied here include Au, Pt, Ru, Pd, Ag, Co, Ni, Cu, W, and Ti, which are well known for their technical importance to nanoelectronics and nanocatalysis.|
|Chemical adsorption of hydrogen atoms on graphite surfaces has attracted considerable interest due to its relevance for a broad range of areas including plasma/fusion physics, interstellar chemistry, and hydrogen storage. Remarkably, a rigorous benchmark of the chemisorption barrier heights and potential wells predicted by widely applied density functionals such as GGA or B3LYP has not yet been reported. Obviously, molecular size represents a problem when attempting to compare DFT energetics to highly accurate ab initio levels of theory. Pyrene (C16H10) and coronene (C24H12) are probably the smallest compounds suitable to model H attack on the graphite (0001) plane. Here, we show that the size effect is nearly negligible due to the surprisingly local character of the overall H-C interaction.
Our study presents counterpoise-corrected UGGA and UB3LYP, and ROMP2, ROCCSD, and ROCCSD(T) potential energy curves (PECs), based on relaxed-scan UB3LYP/cc-pVDZ geometries, for the approach of atomic hydrogen head-on to one of the carbon atoms of the central carbon hexagon (site A), the midpoint of two neighboring central carbon atoms (site B), and the midpoint of a central hexagon (site C). Site A attack leads to the only global potential energy minimum, corresponding to chemisorbed H (relative energy for CCSD(T) around -0.4 eV), and a barrier (CCSD(T): 0.5 eV) for the H approach. For site B attack, we found the existence of a shoulder in the case of coronene + H, and a purely repulsive wall for pyrene + H. Site C is purely repulsive. Interestingly, the ROCCSD(T)//UB3LYP PECs are close to those of straightforward UB3LYP, while the commonly employed UGGA is much too attractive and does not possess a barrier for the H attack.|
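The counterpoise correction applied to these PECs is a bookkeeping identity on fragment energies (Boys-Bernardi scheme). A minimal sketch; the numeric values in the usage note are invented for illustration, not results from the study:

```python
def counterpoise(e_ab, e_a_ab, e_b_ab, e_a, e_b):
    """Boys-Bernardi counterpoise correction.
    e_ab:          energy of the complex AB in the full AB basis
    e_a_ab, e_b_ab: monomer energies computed in the full AB basis
                    (i.e., with the partner's 'ghost' basis functions)
    e_a, e_b:       monomer energies in their own bases
    Returns (CP-corrected interaction energy, BSSE)."""
    e_int_cp = e_ab - e_a_ab - e_b_ab
    e_int_raw = e_ab - e_a - e_b
    bsse = e_int_cp - e_int_raw  # the raw result over-binds by this amount
    return e_int_cp, bsse
```

Because the monomer energies drop when the partner's basis functions become available, the uncorrected interaction energy is artificially too attractive; the counterpoise recipe removes exactly that basis-set superposition error.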
|This talk will cover my personal experience graduating and starting a non-traditional chemistry career. It is important to note that it was not a passion for business, writing, law, or some other field that motivated this decision, but only an abiding wish to leave the laboratory forever. If you are in a similar situation, this talk will offer encouragement that you can use your degree, and what you learned in and out of the lab, to have a rewarding career.|
|As chemists and engineers from many disciplines strive to develop new and beneficial products, the need for environmental protection also evolves. Until the late 1970s, strategies to provide protection primarily involved determining the extent of toxic chemical distribution. Unfortunately, persistent pesticides and industrial chemicals occurred ubiquitously across the US and much of the industrialized world. An entirely new field of science, environmental toxicology, emerged to answer questions about the actual harm that might be caused by these pollutants. At that time the discipline sat at the intersection of toxicology, environmental chemistry, and ecology. The field flourished because environmental chemistry was still an emerging field, a myriad of toxicants needed to be evaluated in a seemingly endless number of wildlife species, and regulations needed to be developed or modified to incorporate the findings from these efforts. From the outset, environmental and analytical chemistry played a critical role in the success of environmental toxicology. Immense efforts were undertaken to determine chemical occurrence, persistence, and transformation in the environment. Parallel studies were implemented to determine which of these chemicals and their transformation products were toxic, as well as which species were sensitive to the identified toxicants. Understanding chemical fate was necessary to determine where toxicants occurred and were likely to occur in as yet unmonitored scenarios, which allowed appropriate study species to be selected for toxicological characterization and ecological evaluation before field studies began. This presentation will discuss the application of environmental and analytical chemistry to field studies that have helped shape the US regulatory environment for hazardous chemicals. Case studies will include evaluation of pesticides, wastes from mining operations, non-lethal monitoring techniques, and nanomaterials.|
|A career in patent law provides an outstanding opportunity to use your technical background to protect and defend cutting-edge intellectual property rights. This presentation will provide a brief background of U.S. patent law and explore various opportunities for chemists to pursue non-traditional careers in patent law, including patent prosecution, patent litigation, and licensing. The importance of pursuing an appropriate scientific degree, choosing the right law school, and taking the patent bar examination also will be discussed.|
|Many career opportunities for chemists can be found beyond the laboratory and the classroom. This talk will focus on careers in science policy in the government and nonprofit sectors. Examples will be drawn from the speaker's personal experience and will also highlight the enhanced career opportunities that can be provided by the ACS Public Policy Fellowships. The talk will also provide guidance on what background is needed to pursue these nontraditional career paths.|
|It's not the career usually envisioned when embarking on a graduate program in chemistry, but scholarly publishing is a field with plenty of challenges as well as opportunities for leveraging that chemistry degree. The last decade has been one of many changes in the publishing industry. One thing that hasn't changed is the goal of delivering the best scientific content to scientists, while anticipating the technology advances that will meet the publication needs of tomorrow's chemists.|
|My talk will focus on how a chemistry degree provides a solid basis for careers in scholarly publishing, communications, and journalism.|
|There are newer, evolving paradigms for undertaking research and development within the pharmaceutical and chemical industries. These include a trend toward multi-disciplinary environments, virtual teams, and working with partners across the globe. The latter is an area of particular growth, from identifying strategic partners to managing those relationships and alliances. Often, new skills in global project management are required beyond the traditional technical training received in undergraduate (and perhaps graduate) studies. Career transition may require identifying gaps in your skill sets and re-training, whether on the job or through graduate certificates or degrees. The opportunity to pursue non-scientific tracks as a scientist is becoming more common within organizations, especially in an effort to retain talent. Thus, career development and progression is not always a linear track. Here I will briefly reference a few alternative career paths, including project and alliance management and technology outsourcing.|
|New graduates in chemistry and related fields often overlook opportunities in small business as a career option. A watershed in hiring practices for chemistry graduates occurred in 2002, when small business hired more chemists than big business. This transition was the culmination of a more than 12-year trend in which small business hires increased from 28% in 1990 to 52% in 2002. This trend continues. New chemistry graduates must reflect this dramatic change in their methods of job search, job expectations, and career preparation. Suggestions for funding, opportunities for early business start-ups while still in school, and challenges for job seekers will be discussed. Vercellotti's small business career at V-LABS, INC. will be explored.|