Plenary Talk: Realistic modeling – new insight into the functions of the cerebellar network @ Amriteshwari Hall
Aug 12 @ 1:37 pm – 2:24 pm

Egidio D’Angelo, MD, Ph.D.
Full Professor of Physiology & Director, Brain Connectivity Center, University of Pavia, Italy


Realistic modeling: new insight into the functions of the cerebellar network

Realistic modeling is an approach based on the careful reconstruction of neurons and synapses, starting from biological details at the molecular and cellular level. This technique, combined with connection topologies derived from histological measurements, allows the reconstruction of precise neuronal networks. Finally, the advent of dedicated software platforms (PYTHON-NEURON) and of supercomputers allows large-scale network simulations to be performed in reasonable time. This approach inverts the logic of older theoretical models, which started from an intuition about how the network might work. In realistic modeling, network properties “emerge” from the numerous biological properties embedded in the model.
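
As a purely illustrative sketch of the PYTHON-NEURON workflow mentioned above, the toy script below builds and stimulates a single-compartment neuron with standard Hodgkin-Huxley channels. It is a minimal example only: the realistic cerebellar models discussed here are multi-compartmental, use cell-specific ionic mechanisms, and are wired into full networks, none of which is shown.

    from neuron import h
    h.load_file("stdrun.hoc")                 # NEURON's standard run system

    soma = h.Section(name="soma")             # one ~20 um compartment
    soma.L = soma.diam = 20
    soma.insert("hh")                         # Hodgkin-Huxley channels

    stim = h.IClamp(soma(0.5))                # current step at mid-section
    stim.delay, stim.dur, stim.amp = 10, 100, 0.1   # ms, ms, nA

    t = h.Vector().record(h._ref_t)           # record time
    v = h.Vector().record(soma(0.5)._ref_v)   # record membrane potential

    h.finitialize(-65)                        # initialize Vm (mV)
    h.continuerun(120)                        # simulate 120 ms
    print(f"peak Vm = {v.max():.1f} mV")      # the cell spikes during the step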

This approach is illustrated here through an outstanding application of realistic modeling to the cerebellar cortex network. The neurons (over 10⁵) are reproduced at a high level of detail, generating non-linear network effects such as population oscillations and resonance, phase-reset, bursting, rebounds, short-term and long-term plasticity, and spatiotemporal redistribution of input patterns. The model is currently being used in the context of the HUMAN BRAIN PROJECT to investigate cerebellar network function.

Correspondence should be addressed to

Dr. Egidio D’Angelo,
Laboratory of Neurophysiology
Via Forlanini 6, 27100 Pavia, Italy
Phone: 0039 (0) 382 987606
Fax: 0039 (0) 382 987527
dangelo@unipv.it

Acknowledgments

This work was supported by grants from European Union to ED (CEREBNET FP7-ITN238686, REALNET FP7-ICT270434) and by grants from the Italian Ministry of Health to ED (RF-2009-1475845).

Invited Talk: Nanoscale Simulations – Tackling Form and Formulation Challenges in Drug Development and Drug Delivery @ Sathyam Hall
Aug 13 @ 2:15 pm – 2:40 pm

Lalitha Subramanian, Ph.D.
Chief Scientific Officer & VP, Services at Scienomics, USA


Nanoscale Simulations – Tackling Form and Formulation Challenges in Drug Development and Drug Delivery

Lalitha Subramanian, Dora Spyriouni, Andreas Bick, Sabine Schweizer, and Xenophon Krokidis (Scienomics)

The discovery of a compound that is potent against a target is a major milestone in the pharmaceutical and biotech industry. However, a potent compound is effective as a therapeutic agent only when it can be administered such that the optimal quantity is transported to the site of action at an optimal rate. The active pharmaceutical ingredient (API) has to be tested for its physicochemical properties before the appropriate dosage form and formulation can be designed. Commonly evaluated parameters include crystal forms and polymorphs, solubility, dissolution behavior, stability, partition coefficient, water sorption behavior, surface properties, and particle size and shape. Pharmaceutical development teams face the challenge of quickly and efficiently determining a number of properties with small quantities of expensive candidate compounds. Recently the trend has been to screen these properties as early as possible, when the candidate compounds are often not yet available in sufficient quantities. Increasingly, these teams are leveraging nanoscale simulations similar to those employed by drug discovery teams for several decades. Nanoscale simulations are used to predict behavior from very little experimental data, and only if the prediction is promising are further experiments performed. Nanoscale simulations are also used in drug development and drug delivery to gain insight into the behavior of the system, so that process failures can be remediated and formulation performance improved. Thus, predictive screening and in-depth understanding lead to experimental efficiency, with far-reaching business impact.

With specific examples, this talk will focus on the different types of nanoscale simulations used to predict properties of the API in excipients and also provide insight into system behavior as a function of shelf life, temperature, mechanical stress, etc.
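
As a rough illustration of the particle-level arithmetic underlying such nanoscale simulations, the sketch below evaluates a bare Lennard-Jones energy for a small particle cluster. The parameters and geometry are invented for the example; production work relies on validated force fields and dedicated simulation engines, not on a hand-rolled pair sum like this one.

    import numpy as np

    EPS, SIGMA = 0.5, 3.4    # illustrative well depth (kcal/mol) and particle size (Angstrom)

    def lj_energy(positions):
        """Total pairwise Lennard-Jones energy of a particle configuration."""
        e = 0.0
        n = len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(positions[i] - positions[j])
                sr6 = (SIGMA / r) ** 6
                e += 4 * EPS * (sr6 ** 2 - sr6)   # repulsive r^-12, attractive r^-6
        return e

    # 18 particles on a 4-Angstrom cubic lattice, standing in for an API/excipient cluster
    coords = 4.0 * np.array([[x, y, z] for x in range(3)
                             for y in range(3) for z in range(2)], dtype=float)
    print(f"configuration energy: {lj_energy(coords):.2f} kcal/mol")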

Invited Talk: Applying Machine learning for Automated Identification of Patient Cohorts @ Sathyam Hall
Aug 13 @ 2:40 pm – 3:05 pm

Srisairam Achuthan, Ph.D.
Senior Scientific Programmer, Research Informatics Division, Department of Information Sciences, City of Hope, CA, USA


Applying Machine learning for Automated Identification of Patient Cohorts

Srisairam Achuthan, Mike Chang, Ajay Shah, Joyce Niland

Patient cohorts for a clinical study are typically identified based on specific selection criteria. In most cases, considerable time and effort are spent finding the most relevant criteria that could potentially lead to a successful study. For complex diseases this process can be more difficult and error-prone, since relevant features may not be easily identifiable. Additionally, the information captured in clinical notes is in non-coded text format. Our goal is to discover patterns within the coded and non-coded fields and thereby reveal complex relationships between clinical characteristics across different patients that would be difficult to uncover manually. Towards this, we have applied machine learning techniques such as artificial neural networks and decision trees to determine patients sharing similar characteristics from available medical records. For this proof-of-concept study, we used coded and non-coded (i.e., clinical notes) patient data from a clinical database. Coded clinical information such as diagnoses, labs, medications, and demographics recorded within the database was pooled together with non-coded information derived from clinical notes, including smoking status and lifestyle (active/inactive) status. The non-coded textual information was identified and interpreted using the Natural Language Processing (NLP) tool I2E from Linguamatics.
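
For illustration only, the sketch below runs the kind of decision-tree classification described above on entirely synthetic patient records. The features, the NLP-derived flags, and the cohort label are hypothetical stand-ins for the coded fields and I2E-extracted variables used in the actual study.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.integers(20, 90, n),      # age (coded demographic)
        rng.normal(5.5, 1.2, n),      # lab value, e.g. an HbA1c-like measure (coded)
        rng.integers(0, 2, n),        # diagnosis flag (coded)
        rng.integers(0, 2, n),        # smoking status (NLP-derived from notes)
        rng.integers(0, 2, n),        # active lifestyle (NLP-derived from notes)
    ])
    # Synthetic cohort rule, invented so the example has learnable structure
    y = ((X[:, 1] > 6.0) & (X[:, 3] == 1)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))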