We assured comparability of the output formats between the two compared algorithms. As the species tree was not always completely resolved in our case, some trivial transfers were inferred. For quality reasons, we discarded such trivial transfers in this simulation study. The abscissa represents the sensitivity percentage and the ordinate the tested function. We compared the performances of these functions to those yielded by a simple distance measure, Q9. In the case of artificial data, the functions Q7, Q8a, and Q8b provided better performances than the function Q9 only when recombination was considered. Thus, the following general trend can be observed: the higher the number of transfers, the lower the detection rates. Higher degrees of recombination also lead to a lower detection rate for all the functions, but favor the functions Q8a and Q8b, as their performance degrades less, especially in the middle range. The function Q7, which showed very good performances for the real-life prokaryotic data, does not outperform the function Q9 in this particular testing framework. It is important to note that even without recombination, the functions Q8a and Q8b can also be used, as they yield almost the same detection rates as the function Q9. The main differences can be observed in the tail of the distribution, for the lower 25% quartile, as the median and upper quartile are already at the same maximum value (of 100%).

The impact of horizontal transfers on the evolution of many bacteria and viruses, as well as the cumulative effect of recombination over multiple generations, remains to be investigated in greater detail. The discussed method also benefits from a Monte Carlo p-value validation procedure, obviously at the cost of the associated validation constant needed for maintaining precision. Because of its low time complexity, the new algorithm can be used in complex phylogenetic and genomic studies involving thousands of species. The new variability clustering functions Q7, Q8a, Q8b, and Q9 were introduced and tested in our simulations.

We smoothly introduce the reader to various notions, such as the different types of biological interaction networks and the fundamental measures and metrics used in most methods for analyzing them. Networks are basically represented by the mathematical notion of the underlying graph. The distribution of degrees across all nodes is usually used to model the number of edges a node has.
On the basis of this distribution, a network can be characterized as (i) a scale-free network (where the degree distribution follows a power law), (ii) a broad-scale network (where the degree distribution follows a power law with a sharp cutoff at its tail), or (iii) a single-scale network (where the degree distribution decays fast). The degree correlation measures the expected degree of the neighbors of a node and is connected with the notion of assortativity. As regards shortest paths, analyses of these networks usually use the simple geodesic distance (the length of the shortest path), the average shortest path, and the diameter of the network, which is defined as the maximum distance between two nodes. The clustering coefficient measures the density of edges in the neighborhood of a node and is defined as the number of edges among the neighbors divided by the total number of possible edges among them. The network average clustering coefficient is the average of the local clustering coefficients. The clustering coefficient can be used to characterize a network as small-world if the average clustering coefficient is significantly higher than that of a random graph constructed on the same vertex set, and if the graph has approximately the same mean shortest path length as the corresponding random graph. Related notions are the global clustering coefficient, which is based on triplets of nodes, and the transitivity ratios, which mainly give higher weights to higher-degree nodes. Assortativity depicts the tendency of nodes to attach to others that are similar in some way, and is defined as the Pearson correlation coefficient [64] of degree between pairs of linked nodes. Positive values of the metric indicate correlation between nodes of similar degree, while negative values indicate dependencies between nodes of different degrees. Biological networks generally depict negative values, while social and technological networks generally have high positive values of assortativity. Reciprocity is defined as the ratio of the number of links pointing in both directions in a network to the total number of links; when its value is equal to 1, we have a completely bidirectional network, while for a completely unidirectional network it is equal to 0. Moreover, in Reference [26], a new measure of reciprocity is proposed that defines it as the correlation coefficient between the entries of the adjacency matrix of a real network. As the authors of the publication point out, their findings show that, using this metric, real networks are either negatively or positively correlated, while networks of the same type (ecological, social, etc.) tend to display similar values of this measure. The various notions of centrality are of particular interest:
· the degree centrality characterizes the compactness of link presence at a node and is essentially equal to the degree of the node (in-degree and out-degree);
· the closeness centrality is essentially the inverse of the farness, which is defined as the sum of the distances of a node to the other nodes;
· the eigenvector centrality, unlike the degree centrality, which is based only on the number of neighbors, also considers the centralities of the neighbors.
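Before turning to modularity, these measures can be made concrete. The following minimal sketch uses Python with the networkx library (an assumed toolkit; the text does not prescribe one) to compute them on a small random graph.

```python
import networkx as nx

# Small illustrative network (an Erdos-Renyi random graph; see the random graph model below).
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)

degrees = [d for _, d in G.degree()]                     # degree of every node
avg_clustering = nx.average_clustering(G)                # mean of the local clustering coefficients
assortativity = nx.degree_assortativity_coefficient(G)   # Pearson correlation of degrees across edges

if nx.is_connected(G):
    print(f"average shortest path: {nx.average_shortest_path_length(G):.2f}")
    print(f"diameter (maximum distance): {nx.diameter(G)}")

print(f"mean degree: {sum(degrees) / len(degrees):.2f}")
print(f"average clustering: {avg_clustering:.3f}, assortativity: {assortativity:.3f}")
```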
The modularity of a partition is a scalar value between -1 and 1 that measures the density of links inside communities as compared to links between communities (see also Reference [12]). Modularity can be used in many techniques that optimize it directly, including greedy techniques, simulated annealing, extremal optimization, and spectral optimization. The measures defined above cannot serve as metrics, in the strict sense, because they do not uniquely identify the structure of a graph. In order to fill that gap, a new metric was proposed ([9], Chapter 6) that can be used to uniquely identify graph structures. Random graphs represent a model proposed by Erdős and Rényi, which is quite popular as it is the main theoretical vehicle for studying the practical and theoretical properties of real graphs. According to the random graph model, a network is modeled as a set of n vertices, with edges appearing between each pair of them with probability p. As remarked in Reference [32], the random graph model basically studies ensembles of graphs; an interesting aspect of its properties is the existence of a giant component. Despite their popularity, random networks fail to capture the behavior of scale-free networks, which are governed by power law distributions. Such graphs appear in bioinformatics and related applications, and they can be handled by a variety of models, the most popular being the Barabási-Albert model. This model is based on the assumption that a network evolves through the addition of new vertices and that new vertices are connected to existing vertices according to their degree. The authors initially present a simple unified model that evolves by using simple rules: (i) it begins with a small complete graph. The fitness is related to studies based on competition dynamics and weighted networks. The authors also present two specific models, a metabolic network model and a bipartite network model that is useful in modeling real bipartite relationships. Gene microarrays contain the expression levels of thousands of genes, while gene sets contain data describing common characteristics for a set of genes. The types of networks that can be inferred ([9], Chapter 1) include Gaussian graphical models, Boolean networks, probabilistic Boolean networks, differential equation-based networks, mutual information networks, the collaborative graph model, the frequency method, and so on. In graph theoretical models, the networks are represented by a graph structure where the nodes represent the various biological elements and the edges represent the relationships between them. When reconstructing graph models, the process entails the identification of genes and their relationships. Special cases of networks that are worth dealing with are co-expression networks and the collaborative graph model. In co-expression networks, edges designate the co-expression of various genes, while in the collaborative graph model [44] a weight matrix is employed that uses weighted counts in order to estimate and qualitatively designate the connections between the genes. A Boolean network consists of a set of variables (that are mapped to genes) and a set of Boolean functions, one assigned to each variable. Each gene can be on (1) or off (0), and the Boolean network evolves in time, as the values of the variables at time t determine the values at time t + 1. Boolean networks were first introduced by Kauffman [39, 40] and have found many applications in molecular biology.
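A minimal sketch of such a synchronous Boolean network follows; the three genes and their update rules are hypothetical, chosen only to show how the state at time t determines the state at time t + 1.

```python
def step(state):
    """Synchronous update: all variables are recomputed from the state at time t."""
    a, b, c = state
    return (
        int(b and not c),  # gene A: activated by B, repressed by C (hypothetical rule)
        int(a or c),       # gene B: activated by A or C (hypothetical rule)
        int(not a),        # gene C: repressed by A (hypothetical rule)
    )

state = (1, 0, 0)
for t in range(8):
    print(f"t = {t}: {state}")
    state = step(state)  # the values at time t determine the values at time t + 1
```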
Probabilistic Boolean networks are an extension of Boolean networks; they are equipped with probabilities and are able to model uncertainties by utilizing the probabilistic setting of Markov chains. Another graph model that is used when formulating biological problems is the Bayesian network, a formalism that has been fruitfully exploited [37] in computer science. Bayesian networks are probabilistic graphical models that can model evidence and, with the chain rule, calculate probabilities for the handling of various events. More formally, a Bayesian network consists of a set of variables, a set of directed edges that form a directed acyclic graph, and, for each variable, a table of conditional probabilities given its parents. Bayesian networks have been employed in a variety of biological applications, the most prominent being those presented in References [1, 10, 38, 54, 57, 63, 71, 94, 97]. The last class (differential equation models) is based on the employment of a set of differential equations defined to model complex dynamics between gene expressions. In particular, these equations express the rates of change of gene expressions as functions of the expressions of other genes, and they can incorporate many other parameters. The main representatives of these models are linear additive models, where the corresponding functions are linear; details about these models can be found in Reference [87], Chapter 27. The problem of detecting communities and authorities is related to social algorithms, web searching algorithms, and bibliometrics; for a thorough review of this area, one should consult References [16, 24, 25, 46, 59, 74]. In particular, with regard to community detection, various algorithms have been proposed in the literature. A breakthrough in the area is the algorithm proposed in Reference [27], which identifies the edges lying between communities and successively removes them, a procedure that after some iterations leads to the isolation of the communities [27]. The majority of the algorithms proposed in the area are based on spectral partitioning techniques, that is, techniques that partition objects by using the eigenvectors of matrices formed from the underlying data [41, 60, 77, 78]. One should also mention techniques that use modularity, a metric that weighs the density of links inside communities against the density of links between communities [24, 58], the most popular being the algorithm proposed in Reference [12]. The algorithm is based on a bottom-up approach: initially, every node belongs to its own community, and at each stage we examine all the neighbors j of a node i and evaluate the gain in modularity that would be obtained by moving i into the community of j. Among all the neighbors, we choose the one that yields the greatest gain in modularity and, if that gain is positive, we perform the move. This stage of the algorithm stops when no further gain in modularity can be achieved, that is, when a local maximum has been attained. In the second phase, a new network is formed whose nodes are the communities found in the first phase, with edge weights equal to the sum of the weights of the links between nodes in the corresponding two communities. The process is then reapplied, producing a hierarchy; the height of the hierarchy is determined by the number of passes and is generally small.
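The two-phase procedure just described is available in standard libraries; the sketch below assumes networkx 2.8 or later, which ships a Louvain implementation, and reports the modularity of the partition it finds.

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()  # classic benchmark network

partition = community.louvain_communities(G, seed=7)  # bottom-up modularity optimization
Q = community.modularity(G, partition)                # density of links inside vs. between communities

print(f"{len(partition)} communities found, modularity Q = {Q:.3f}")
```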
Besides finding emerging communities, estimating authorities has also attracted attention. When a node (considered valuable and informative) is pointed to by a large number of hyperlinks (so it has a large in-degree), it is called an authority. A node that points to many authority nodes is itself a useful resource and is called a hub. Both metrics have been improved in various forms, and a review can be found in Reference [45]. Some proteins interact for a long period of time, forming units called protein complexes, whereas other proteins interact for a short time, when they spatially contact each other. There is a vast number of proteins, varying across species, so the number of possible interactions is very large. Many verified protein interactions are stored in repositories and are used as a knowledge base for predicting protein interactions. Each of these databases focuses on a different kind of interaction, either by targeting small-scale or large-scale detection experiments, by targeting different organisms or humans only, and so on. All of them usually offer downloads of their data sets or custom web graphical user interfaces for retrieving information. The common representation of these networks uses graphs in which each node and edge carries a large number of attributes expressing properties of proteins (nodes) and of interactions (edges), respectively. Managing and mining large graphs introduce significant computational and storage challenges [75]. Moreover, most of the databases are maintained and updated at different sites on the web. Most of them contain interactions experimentally detected in the laboratory, while other databases contain predicted protein interactions. Curators of the databases review the published articles and select interactions according to various criteria. Data are entered manually, and the user is offered the ability to make queries through a web-based interface or to download a subset of the data in various formats. The user can make queries through a web-based interface, and results are depicted in an interactive table. Protein information annotations are curated by expert biologists using published literature.
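Returning to the hub and authority scores introduced at the start of this section, the following sketch (networkx again, as an assumption) computes them with the HITS algorithm on a toy directed graph.

```python
import networkx as nx

# Toy directed network: p4 has a large in-degree, while p1 and p2 point to authorities.
G = nx.DiGraph([("p1", "p3"), ("p2", "p3"), ("p1", "p4"), ("p2", "p4"), ("p3", "p4")])

hubs, authorities = nx.hits(G, max_iter=100)  # iterative mutual reinforcement

print("top authority:", max(authorities, key=authorities.get))  # expected: p4
print("top hub:", max(hubs, key=hubs.get))                      # expected: p1 or p2
```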
The cell lines differ in the presence of carrier-mediated transport systems and in metabolic activity, and thus the outcome differs. For the design of more specific screening tools, for example, for transporters, specific cells of a certain age and differentiation are used (Doppenschmitt et al.).
With this type of calorimeter, if an exothermic or endothermic event occurs when a sample is heated, the power added to or subtracted from one or both of the furnaces to compensate for the energy change occurring in the sample is measured. Thus, the system is maintained in a thermally neutral position at all times, and the amount of power required to maintain the system at equilibrium is directly proportional to the energy changes occurring in the sample. In both instruments, a few milligrams of the compound under study are weighed into an aluminum pan that can be open, hermetically sealed, or pierced to allow the escape of water, solvent, or decomposition products from pyrolysis reactions. The Tzero technology accounts for the differences between the two types of instruments, such that sensitivity and resolution are improved on a very flat baseline; baseline flatness can be a weak point of the power-compensated machines. In addition to these advantages, direct measurement of heat capacity can be obtained. Factors that can influence the results include the type of pan, the heating rate, the nature and mass of the compound, the particle size distribution, packing and porosity, pretreatment, and dilution of the sample. Phenomena that can be detected using this technique include melting (endothermic), solid-state transitions (endothermic), glass transitions, crystallization (exothermic), decomposition (exothermic), and dehydration or desolvation (endothermic). A heating rate of 10°C/min is a useful compromise between speed of analysis and the detection of any heating-rate-dependent phenomena. If any heating-rate-dependent phenomena are evident, experiments should be repeated at varied heating rates to attempt to identify the nature of the transition(s). These may be related to polymorphism, discussed earlier in this chapter, or to particle size. At 10°C/min the sample showed a single endotherm; however, when the sample was milled, it gave a thermogram that showed a melt-recrystallization-melt transformation. By reducing the heating rate, it could be seen that, rather than being due to a polymorphic transformation induced by the milling process, the transformation was due to a reduction in particle size. For example, for a melting endotherm, the onset temperature, peak temperature, and enthalpy of fusion can be derived. The onset temperature is obtained by extrapolation from the leading edge of the endotherm to the baseline. The peak temperature is the temperature corresponding to the maximum of the endotherm, and the enthalpy of fusion is derived from the area of the thermogram. It is the accepted custom that the extrapolated onset temperature is taken as the melting point; however, some users report the peak temperature in this respect. Recycling experiments can also be conducted, whereby a sample is heated and then cooled. The thermogram may show a crystallization exotherm for the sample, which on subsequent reheating may show a different melting point from that of the first run. In a similar way, amorphous forms can be produced by cooling the molten sample to form a glass.
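As an illustration of the onset, peak, and enthalpy determinations described above, here is a minimal sketch in Python (numpy only); the thermogram is synthetic, and real instrument software performs the same steps with more refinement.

```python
import numpy as np

temp = np.linspace(150.0, 170.0, 401)                     # temperature axis, °C
heat_flow = -2.0 * np.exp(-((temp - 160.0) / 1.5) ** 2)   # mW; endotherms plotted downward

peak_idx = int(np.argmin(heat_flow))
peak_temp = temp[peak_idx]                                # temperature at the endotherm maximum

# Enthalpy: integrate heat flow over time (the heating rate converts °C to seconds).
heating_rate = 10.0 / 60.0                                # 10 °C/min expressed in °C/s
time_s = (temp - temp[0]) / heating_rate
enthalpy_mJ = -float(np.sum(0.5 * (heat_flow[1:] + heat_flow[:-1]) * np.diff(time_s)))

# Onset: tangent at the steepest point of the leading edge, extrapolated to the baseline (0 mW).
grad = np.gradient(heat_flow, temp)
i = int(np.argmin(grad[:peak_idx]))                       # steepest descent on the leading edge
onset_temp = temp[i] - heat_flow[i] / grad[i]

print(f"onset {onset_temp:.1f} °C, peak {peak_temp:.1f} °C, enthalpy {enthalpy_mJ:.0f} mJ")
```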
This thermogram was obtained after the methanol solvate was desolvated in an oven. An exotherm corresponding to crystallization was noted at ~125°C, which indicated that desolvation produced an amorphous form that crystallized on heating. The first of these was a solid-state transition, that is, a transformation without a melt. The small amount of compound required is attractive, especially when the compound is in short supply during prenomination studies. This work experimentally confirmed the topological P/T diagrams used by Espeau et al. In this respect, ultrapure indium and lead traceable standards are probably the most convenient for a two-point calibration. One advantage of an increased heating rate is obviously a shorter analysis time and increased throughput. This allows complex and even overlapping processes to be deconvoluted (Coleman and Craig, 1996). The cyclic heat flow part of the signal (heat capacity, Cp, multiplied by the heating rate) is termed the reversing heat flow component. The nonreversing part is obtained by subtracting this value from the total heat flow curve. It is important to note that all of the noise appears in the nonreversing signal. This minimizes temperature gradients and maximizes conductivity during the heating and cooling cycles. The heat capacity heat flow contribution during the heating and cooling cycles is completely reversible. There need to be a sufficient number of cycles to cover the thermal event under investigation. Some samples may fluctuate in temperature during the sinusoidal ramp in temperature. Using this technique, the glass transition could be separated from a relaxation endotherm that appeared as part of the transition. Although it is useful in this respect, the measurements can be affected by such instrumental parameters as temperature cycling and modulation period. Its use in investigating glass transitions was discussed; however, this was extended to consider its use with regard to desolvation and degradation. The events were separated into the melt (endothermic), which was reversible, and decomposition, which was nonreversible. After this, the temperature is moved up or down to produce a set of quasi-isothermal steps. The net effect of this procedure is to eliminate the effect of heating programs and thus obtain heat capacity data. Using this approach, a sample can be scanned for not only thermal conductivity but also topography, allowing thermal analysis to be performed on specific regions of a sample. Thermogravimetric analysis is based on a sensitive balance that records the weight of the sample as it is heated. The first two-thirds of the water is lost relatively easily on heating and corresponds to "loosely bound" hydrogen-bonded channel water. The remaining one-third of the weight loss clearly represents water held more tightly in the structure and is the water associated with a sodium ion in the crystal lattice. In addition, the dehydration mechanism and the activation of the reaction may depend on the particle size and sample weight (Agbada and York, 1994). If sublimation studies are being undertaken, benzoic acid has been proposed as a calibration standard (Wright et al.). The hot stage consists of a sample chamber with windows that allow the light from the microscope to pass through the sample. The sample can be heated at different rates in the sample chamber, and the atmosphere can be controlled. Thermal events can be observed using the microscope; however, it is more usual to record digital images as stills or movies.
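The reversing/nonreversing separation described above can be sketched as follows (numpy, hypothetical data). In a real modulated experiment, Cp is derived from the amplitude of the modulated response; here it is simply taken as known to keep the sketch short.

```python
import numpy as np

beta = 2.0 / 60.0                                   # underlying heating rate, °C/s
temp = np.linspace(40.0, 120.0, 161)

# Heat capacity with a step at a glass transition near 80 °C (hypothetical values).
cp = 1.8 + 0.9 / (1.0 + np.exp(-(temp - 80.0)))     # J/(g K)
# A kinetic relaxation endotherm superimposed on the transition (also hypothetical).
relaxation = -0.02 * np.exp(-((temp - 82.0) / 2.0) ** 2)   # W/g

total = cp * beta + relaxation    # the total heat flow the calorimeter reports
reversing = cp * beta             # heat-capacity component: Cp x heating rate
nonreversing = total - reversing  # kinetic events (and, in real data, all of the noise)

print(f"reversing signal steps from {reversing[0]:.3f} to {reversing[-1]:.3f} W/g across Tg")
```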
Isothermal microcalorimetry can also be used to determine, among other things, the stability and hygroscopicity of substances (Beezer et al., 2004; Yang and Wu, 2008). When investigating hygroscopicity, two ways of determining the moisture uptake can be used. The instrument utilizes a perfusion attachment with a precision flow-switching valve. In addition to examining the effect of moisture on compounds, organic vapors can also be used (Samra and Buckton, 2004). In this example, there was a transformation from a crystalline trihydrate to a heptahemihydrate. Other examples of use include the stability testing of enalapril maleate and its tablets (Simoncic et al.). Hamedi and Grolier (2007) have used isothermal microcalorimetry to determine the solubility in a solvent-antisolvent system, whereby the heat of dissolution of the compound under investigation is measured after the addition of a solvent. It can also be used to characterize polymorphs and related forms via solution calorimetry (Urakami et al.). The enthalpy of solution for the amorphous compound is an exothermic event, while that of the crystalline hydrate is endothermic. In addition, the ready solubility of the compound in aqueous media is probably governed by entropy considerations. Furthermore, assessments of the heat flow as a function of amorphous-crystalline composition ratios are based on the assumption that the dissolution kinetics of both phases are sufficiently similar (at infinite dilution), thus allowing one cumulative thermal event to manifest. Furthermore, if the degradation products of the reaction can be identified by combining this technique with mass spectrometry, then it may also be possible to elucidate the degradation mechanism. Other considerations include the amount of compound to be determined and the type of matrix from which it must be determined. For most assays, where the compound is easily detected and present at relatively high concentrations in a simple matrix, isocratic elution is usually preferred, since it is simple and no post-equilibration phase is required prior to the next analysis. However, where degradation products (products of side reactions), excipients, or synthetic intermediates of differing lipophilicities are likely to be encountered, gradient elution may be used. Gradient elution offers the advantages of sharper peaks, increased sensitivity, greater peak capacity, and selectivity (increased resolving power). On the other hand, gradient elution may lead to an extended analysis time due to post-run equilibration. The type of detector to be used is usually dictated by the chemical structure of the compound under investigation. Usually, the λmax is chosen; however, to remove unwanted interference, it may be necessary to move away from this value. Where possible, the use of wavelengths less than 250 nm should be avoided because of the high level of background interference and solvent absorption. Other types of detection include refractive index, fluorescence, or mass selective detectors. Detectors based on fluorescence can be used for the assay of compounds that can be specifically detected at low concentrations in the presence of nonfluorescent species. However, since few compounds are naturally fluorescent, they need to be chemically modified, assuming they have a suitable reactive group, to give a fluorescent derivative. During the early stages of development, the amount of method validation carried out is likely to be limited due to compound availability.
At the very least, a calibration curve should be obtained using either an internal standard or external standard procedure. The latter procedure is commonly employed by injecting a fixed volume of standard samples containing a range of known concentrations of the compound of interest. Plots of peak height and/or area versus concentration are checked for linearity by subjecting the data to linear regression analysis. Other tests, such as the limit of detection, precision of the detector response, accuracy, reproducibility, specificity, and ruggedness, may be carried out if more extensive validation is required. Preformulation typically begins during the lead optimization phase, continues through prenomination, and extends into the early phases of development. Decisions made on the information generated during this phase can have a profound effect on the subsequent development of those compounds. Therefore, it is imperative that preformulation be performed as carefully as possible to enable rational decisions to be made. The quantity and quality of the drug can affect the data generated, as can the equipment available and the expertise of the personnel conducting the investigations. In some companies there are specialized preformulation teams, but in others the information is generated by a number of other teams. Whichever way a company chooses to organize its preformulation information gathering, one of the most important facets is close communication between its various departments. Special thanks also to Arvind Varsani, Alan Tatham, Will Barton, Gavin Gunn, and Dee Patel, students from Loughborough University, who have also contributed to our work.
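Returning to the external standard calibration described above, a minimal sketch (Python with scipy; the peak-area data are hypothetical) of the linearity check by linear regression:

```python
from scipy import stats

conc = [0.5, 1.0, 2.0, 5.0, 10.0]           # standard concentrations, ug/mL
area = [10.3, 20.1, 41.0, 101.5, 203.8]     # measured peak areas (hypothetical)

fit = stats.linregress(conc, area)           # least-squares fit of area vs. concentration
print(f"slope {fit.slope:.2f}, intercept {fit.intercept:.2f}, r^2 {fit.rvalue ** 2:.4f}")

# Back-calculate an unknown sample from its peak area.
unknown_area = 75.0
print(f"estimated concentration {(unknown_area - fit.intercept) / fit.slope:.2f} ug/mL")
```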
Following the tape stripping, the remaining epidermal membranes or skin samples may be homogenized, extracted, and analyzed for permeant content. The results of these studies demonstrate how the developed prototype formulations compare in terms of in vitro skin penetration, distribution, and permeation. The criteria for determining the preferred formulation using an in vitro skin model will depend on the particular circumstances. For example, preferred formulations for dermatological activity may demonstrate rapid uptake of drug into the epidermal layers together with limited transfer of the drug through the skin. On the other hand, preferred vehicles for transdermal systemic delivery will be those demonstrating both rapid penetration into and rapid permeation across the skin. This section concentrates on the formulation development of dermatological and transdermal products and will take into account several important factors in the development process. In many companies, the first stage of product development is the formation of a project team. At the inaugural meeting of this team, it is essential that all members be aware of what is required from both a medical and a marketing point of view. Realistic time schedules should be drawn up with well-defined decision points, and due allowance should be made for the inevitable slippage time. In the early stages of product development, there is usually a bottleneck in the analytical department. Most companies work on the basis of allowing two analysts per formulator, but this ratio is often inadequate. It is of paramount importance that the analytical department be involved in the early stages of the formulation process. These are the people who will have to analyze the prototype formulations and look for evidence of stability problems as soon as possible. They are, of course, capable and equipped to do this, but time will be saved if they know precisely what materials the formulator is planning to include in the prototypes. There is no substitute for a fully validated stability-indicating assay, but initial "rough" analytical methods can be extremely useful for determining what excipients to avoid and what conditions, such as pH, are critical to the formulation. Although the medical and marketing departments will have defined targets in terms of the disease to be treated and the territories in which the product is to be launched, it is up to the formulator to specify and identify the optimum formulation. This leads to the unsatisfactory, but unavoidable, situation of the formulator trying to obtain a formulation containing a "hypothetical" maximum dose. Throughout the development process, it is important to maintain a high level of quality.

Formulation Type
The selection of formulation type for systemic transdermal products, which are designed for application to intact, non-diseased skin, is guided by the requirement of the system, be it a semisolid, spray, or patch preparation, to deliver therapeutic amounts of drug into the systemic circulation. On the other hand, the selection of formulation type for dermatological products is influenced more by the nature of the skin lesion.
As pointed out by Kitson and Maddin (1998): "It is idle to pretend that the therapy for skin diseases, as currently practiced, has its origins in science." For these reasons, the dermatological and transdermal formulator must be skilled in the art and knowledgeable in the science of a variety of formulation types. In general, the preparation of such formulations as poultices and pastes is extemporaneous, and it is unlikely that the industrial pharmaceutical formulator will be required to develop stable, safe, and efficacious products of this type. Solutions and powders lack staying power (retention time) on the skin and can afford only transient relief. In modern-day pharmaceutical practice, semisolid formulations are preferred vehicles for dermatological therapy because they remain in situ and deliver their drug payload over extended periods. In the majority of cases, therefore, the developed formulation will be an ointment, emulsion, or gel. Inactive ingredients in semisolid preparations can be a significant factor in formulation efficacy (Wiechers et al.).

Ointments
In its strictest definitive form, an ointment is classified as any semisolid containing fatty material and intended for external application [U.S. Pharmacopeia]. In practice, ointments are considered to be semisolid anhydrous external preparations. In the 19th century, ointments were based on lard, a compounding material the usefulness of which was severely limited by its tendency to turn rancid. Early in the 20th century, lard was replaced by petrolatum (white or yellow soft paraffin or petroleum jelly). In present practice, nonmedicated ointments (ointment bases) are used alone, for emollient or lubricating purposes, or in combination with a drug for therapeutic purposes. The anhydrous hydrocarbon bases contain straight or branched hydrocarbon chains ranging from C16 to C30 and may also contain cyclic alkanes. A typical formulation contains fluid hydrocarbons (mineral oils, liquid paraffins) mixed with longer alkyl chain, higher melting point hydrocarbons (white and yellow soft paraffin, petroleum jelly). The difference between white and yellow soft paraffin is simply that the white version has been bleached. Hard paraffin and microcrystalline waxes are similar to the soft paraffins except that they contain no liquid components. These anhydrous mixtures tend to produce formulations that are greasy and unpleasant to use. The addition of solid components, such as microcrystalline cellulose, can reduce the greasiness. Improved skin feel can also be attained by the incorporation of silicone materials, such as polydimethylsiloxane oil or dimethicones. Silicones are often used in barrier formulations, which are designed to protect the skin against water-soluble irritants. Although the nonmedicated anhydrous ointments are extremely useful for emolliency, their value as topical drug delivery platforms is limited by the relative insolubility of many drugs in hydrocarbons and silicone oils. However, it is possible to increase drug solubility within the formulation by incorporating hydrocarbon-miscible solvents, such as isopropyl myristate or propylene glycol, into the ointment. Although increasing the solubility of a drug within a formulation may often decrease the release rate, it does not necessarily decrease the therapeutic effect. It is well accepted that simple determination of release rates from formulations may not be predictive of drug bioavailability.
For example, when formulated in a simple white petrolatum/mineral oil ointment, the release rate of betamethasone dipropionate was shown to be considerably higher than when the drug was formulated at the same concentration. It is also important to appreciate that various grades of petrolatum are commercially available and that the physical properties of these materials will vary depending on the source and refining process. Slight variations in the physical properties of the constituents of an ointment may have significant effects on drug release behavior (Kneczke et al.). The preparation of ointment formulations may, at first sight, appear a simple matter of heating all of the constituents to a temperature higher than the melting point of all of the excipients and cooling with constant mixing. The reality, however, is that the process is somewhat more complex and requires careful control over various parameters, especially the cooling rate. Rapid cooling, for example, creates stiffer formulations in which there are numerous small crystallites. On the other hand, a slow cooling rate results in the formation of fewer, but larger, crystallites and a more fluid product. Further information regarding temperature effects and ointment phase behavior can be found in Osborne (1992, 1993) and Pena et al.

Gels
The common characteristic of all gels is that they contain continuous structures, which provide solid-like properties (Barry, 1983). Depending on their constituents, gels may be clear or opaque and may be polar, hydroalcoholic, or nonpolar. Gel viscosity is generally a function of the amount and molecular weight of the added thickener. There is a variety of semisynthetic celluloses in use as thickeners in gel formulations. These celluloses are obtainable in diverse molecular weight grades, and the higher-molecular-weight grades are used at 1% to 5% (w/w) for gelation. In the preparation of aqueous gels, the cellulose is dissolved in a preheated portion of the required water. On dispersion of the cellulose in the hot water, the remainder of the water is added, cold, and stirred to form the gel. It is useful, when developing prototype gel formulations, to evaluate a variety of different types of cellulose. It is also important to appreciate that some celluloses may exhibit specific incompatibilities with other potential formulation ingredients. Because they are of naturally occurring plant origin, the branched-chain polysaccharide gums, such as tragacanth, pectin, carrageenan, and guar, will have widely varying physical properties depending on their source. Viscosity may be enhanced synergistically by the addition of inorganic suspending agents such as magnesium aluminum silicate. Tragacanth, a mixture of water-insoluble and water-soluble polysaccharides, is negatively charged in aqueous solution and therefore incompatible with many preservatives when formulated at a pH of 7 and above. Similarly, xanthan gum, which is produced by bacterial fermentation, is incompatible with some preservatives but, unlike other gums, is very stable over a wide range of temperatures and pH. The viscosity of xanthan gum solutions decreases at higher shear rates, but when the shear forces are removed, the product will thicken. Xanthan gum is used to prepare aqueous gels, usually in conjunction with bentonite clays, and it is also used in O/W emulsions to help stabilize oil droplets against coalescence.
The sodium salt of alginic acid, sodium alginate, is used at 5% to 10% as a gelling agent, and firm gels may be obtained by incorporating small amounts of soluble calcium salts. Many gums are ineffective in hydroalcoholic gels containing greater than 5% alcohol. Nonetheless, ethanol or glycerin is often used as a wetting agent to ease aqueous dispersion of the gums. The flat surfaces of bentonite are negatively charged, whereas the edges are positively charged. The clays swell in the presence of water because of hydration of the cations and electrostatic repulsion between the negatively charged faces. Thixotropic gels form at high concentrations because the clay particles combine in a flocculated structure in which the edge of one particle is attracted to the face of another. The rheological properties of these clay dispersions are, therefore, particularly sensitive to the presence of salts. Bentonite, a native colloidal hydrated aluminum silicate (mainly montmorillonite), can precipitate under acidic conditions, and formulations must be at pH 6 or above. A synthetic clay (colloidal silicon dioxide) is also useful for thickening both aqueous and nonpolar gels. The usual concentrations of clay required to thicken formulations are from 2% to 10%. Polymeric materials used in gels and other dermatological formulations are reviewed in Valenta and Auner (2004). By far the most extensively employed gelling agents in the pharmaceutical and cosmetic industries are the synthetic carboxyvinyl polymers known as carbomers. These are high-molecular-weight polymers of acrylic acid cross-linked with either allylsucrose or allyl ethers of pentaerythritol. The most common way to induce gelation is to convert the acidic molecule to a salt by the addition of an appropriate neutralizing agent. For aqueous or polar solvent-containing formulations, carbomer gelation can be induced by the addition of simple inorganic bases, such as sodium or potassium hydroxide. Less polar or nonpolar solvent systems may be neutralized with amines, such as triethanolamine or diethanolamine. For example, clear and stable hydroalcoholic gels containing 40% ethanol can be thickened with triethanolamine or tromethamine. Neutralization ionizes the carbomer molecule, generating negative charges along the polymer backbone, and the resultant electrostatic repulsion creates an extended three-dimensional structure. Care must be taken not to under- or overneutralize the formulation, as this will result in viscosity or thixotropy changes (Planas et al.). Overneutralization will reduce viscosity because the excess base cations screen the carboxy groups and reduce electrostatic repulsion. Using this mechanism, maximum thickening will not be instantaneous, as it is with base neutralization, and may take several hours. Heating will accelerate the process, but the system should not be heated above 70°C. The dispersion process may take some time, and many formulators prepare a concentrated stock dispersion of carbomer for dilution. The exact quantity of neutralizing agent to be added depends on the type and equivalent weight of the carbomer (carbomer resins have an approximate equivalent weight of 76). However, differences in batch-to-batch mean molecular weight may result in variations in the rheological characteristics of aqueous dispersions (Pérez-Marcos et al.). Despite this, carbomer gel rheology remains remarkably stable within the pH range of 5 to 8 (Islam et al.). It is possible to modulate the flow behavior and elastic properties of carbomer gels using surfactants (Barreiro-Iglesias et al.).
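As a worked example of the equivalent-weight arithmetic mentioned above, and assuming simple one-to-one acid-base stoichiometry (in practice the gel is usually titrated to a target pH rather than calculated):

```python
# Stoichiometric estimate only; real formulations are adjusted to a target pH.
carbomer_g = 5.0          # carbomer in the batch, g
eq_wt_carbomer = 76.0     # approximate equivalent weight quoted above, g/equiv
eq_wt_naoh = 40.0         # sodium hydroxide, g/equiv

naoh_g = carbomer_g / eq_wt_carbomer * eq_wt_naoh
print(f"~{naoh_g:.2f} g NaOH to fully neutralize {carbomer_g:.0f} g carbomer")  # ~2.63 g
```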
The carbomers have an excellent safety profile, are generally regarded as essentially nontoxic and nonirritant materials, and have been extensively used by the pharmaceutical and cosmetic industries. In addition, there is no evidence of hypersensitivity or allergic reactions in humans as a result of topical application. Although aqueous-based formulations remain the most popular form of gel preparation, there has been some interest in the development of nonaqueous gel systems, particularly for water-sensitive drugs (Chow et al.). Depending on the polymer, concentrations of 1% (w/w) to 20% (w/w) are used to provide gelation, and viscosity can also be varied by adjusting the ratio of the hydrophilic solvents.

Creams
Creams are two-phase preparations in which one phase (the dispersed or internal phase) is finely dispersed in the other (the continuous or external phase). The dispersed phase can be either hydrophobic based (O/W creams) or aqueous based [water-in-oil (W/O) creams]. Whether a cream is O/W or W/O depends on the properties of the system used to stabilize the interface between the phases.
Thus, the presence of pepsin and pancreatin in simulated gastric and intestinal fluids, respectively, may be especially important in the dissolution testing of hard gelatine capsules (Digenis et al.).

Example 2
The ionic concentration in the test medium can affect both the drug solubility and the release mechanism for modified-release formulations. Solutes will affect the hydration of the gel matrix and, thereby, the drug release rate. It has been shown for such tablets that the correlation to in vivo data can be completely lost by use of inappropriate ionic compositions in the test medium (Abrahamsson et al.). To reduce variability in dissolution results due to the test medium, the quality aspects of the dissolution media components that could affect drug dissolution and release must be identified, and appropriate qualities of the components should be defined. This is especially important for the use of surfactants to provide micellar solubilization in the test medium (Crison et al.). Another potential source of variability is impurities in the components, which may alter the solubility or catalyze degradation of labile drugs. It is also important to verify that the dissolution test medium is stable, that is, that the components are not degraded or precipitated during the dissolution test period. This is of no concern for plain buffer systems but is more relevant for complex media including physiological components. Dissolved air in the dissolution medium could, under certain circumstances, appear as air bubbles on the surface of the dosage form or of released solid material. This will clearly affect the dissolution process by reducing wetting and the available surface area for dissolution in an uncontrolled way. It is, however, important to realize that the reaeration of deaerated water is a rapid process.

Other Study Design Aspects
The design aspects of dissolution testing include, primarily, the choice of sampling intervals and the number of tablets to be tested. Batch control often includes the testing of 6 individual units, whereas testing for regulatory purposes most often requires the testing of 12 individual units. For example, a biphasic release pattern or a significant lag phase may not be detected if too few samples are collected. Another design aspect arises when several parameters in the dissolution test method are varied. This could be the situation when looking for the best correlation to in vivo data, testing the robustness of the dissolution method, or testing the robustness of the dissolution properties of a certain formulation toward different physiological factors. The traditional approach has been to vary one factor at a time, while keeping the others at a constant level. The main disadvantages of this approach are the numerous experiments needed when many factors have to be investigated, and the risk of suboptimization when there are interactions between different study variables. Statistical experimental design has been applied to dissolution testing during recent years as a method of reducing these problems. For full information regarding the design and evaluation of such experiments, statistical textbooks such as Statistics for Experimenters (Box et al.) can be consulted.
The basic principle of experimental design is to vary all factors concomitantly according to a randomized and balanced design, and to evaluate the results by multivariate analysis techniques, such as multiple linear regression or partial least squares. It is essential to check by diagnostic methods that the applied statistical model appropriately describes the experimental data. An unacceptably poor fit indicates experimental errors or that another model should be applied. If a more complicated model is needed, it is often necessary to add further experimental runs to resolve such a model correctly. An example of a design aimed at validation of a dissolution method is given below (Gottfries et al.). Seven factors were included, and each one was tested at two different levels plus one center point. In this case, there were 2^7 = 128 unique experiments that could be performed, excluding the center point, to cover all possible combinations of the low- and high-level settings of the seven factors. Such a large number of experiments is seldom practically and economically justified. However, in statistical design it is possible to use fractional designs; that is, a limited number of all possible experiments is chosen according to a balanced design. In the present case, only 16 experiments, excluding the center point, were performed, and the settings of all experimental runs are presented in Table 3. The most predominant effects were provided by the stirring rate (St), the temperature (T), the ionic strength (Ion), the square of T, the interaction between St and the buffer volume (Buf), and the interaction between T and Ion. It is also possible to use an obtained model to predict dissolution results for any experimental setting within the tested domain. In this case, dissolution profiles were simulated for all possible combinations of settings within a series of predetermined limits, to determine acceptable limits for methodological variation. Examples of applications of statistical designs for optimizing correlations with in vivo data, and for testing a formulation under different experimental conditions to elucidate the sensitivity of the drug release toward different physiological factors, have also been published (Abrahamsson et al.).

[Table 3: Worksheet illustrating a statistical experimental design for evaluating the effect on dissolution of variations in the test conditions of an in vitro dissolution method.]
[Figure: The height of the bars illustrates the change in response estimated for a relative increase of each factor from the mid-point level to the high level in the factorial design.]

Assessment of Dissolution Profiles
It is often desirable to summarize the dissolution results by some response variable. For rapidly dissolving dosage forms, it may be sufficient to report the amount dissolved at, for example, 15 or 30 minutes. For dosage forms where it is relevant to study the whole profile, more sophisticated methods are needed, since the use of a single point neglects all other data points. Any model can be applied to in vitro dissolution data and fitted by linear or nonlinear regression, as appropriate. However, a more general equation that is commonly applied to dissolution data is the Weibull equation (Langenbucher, 1976):

A(t) = A_{\infty} \left[ 1 - e^{-\left( (t - t_{\mathrm{lag}}) / t_d \right)^{b}} \right]    (1)

where A(t) is the amount dissolved at time t, A_{\infty} is the total amount dissolved at infinite time, t_{\mathrm{lag}} is the lag time before the onset of dissolution, t_d is a time-scale parameter, and b is a shape parameter. Two curves differing only in t_d appear as being stretched or compressed along the time axis. At b values of 0 and 1, the dissolution-time curve follows zero- and first-order kinetics, respectively.
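In practice, the Weibull parameters are obtained by nonlinear regression. The sketch below fits equation (1) to a dissolution profile using scipy, with t_lag fixed at zero for simplicity; the data points and starting values are invented for illustration.

```python
# Minimal sketch: fitting the Weibull model of equation (1) to an
# in vitro dissolution profile by nonlinear least squares.
# The data are invented for illustration; t_lag is fixed at zero.
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, a_inf, t_d, b):
    """A(t) = A_inf * (1 - exp(-((t / t_d) ** b))), with t_lag = 0."""
    return a_inf * (1.0 - np.exp(-(t / t_d) ** b))

t = np.array([5, 10, 15, 20, 30, 45, 60, 90], dtype=float)  # minutes
a = np.array([8, 22, 38, 51, 70, 86, 93, 98], dtype=float)  # cumulative % dissolved

p0 = [100.0, 25.0, 1.0]                      # starting values: A_inf, t_d, b
(a_inf, t_d, b), _ = curve_fit(weibull, t, a, p0=p0)
print(f"A_inf = {a_inf:.1f}%, t_d = {t_d:.1f} min, b = {b:.2f}")
```

With t_lag = 0, t_d is the time at which 63.2% of the plateau value A_inf has dissolved, and b controls the curve shape, as discussed in the text.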
An alternative, model-independent approach has the advantage of being applicable to all types of dissolution profiles, since it does not require fitting to any model; the only prerequisite is that data points are available close to the final plateau level.

It is often necessary to compare the dissolution profiles of two versions of a product, for example, when a change has been introduced in the composition, the manufacturing process, or the manufacturing site. The aim is then to maintain the same dissolution properties as for the original version. Such comparisons of dissolution profiles are performed by calculating a similarity factor, f2, from cumulative mean data (Shah et al.):

f_2 = 50 \cdot \log_{10} \left\{ \left[ 1 + \frac{1}{n} \sum_{t=1}^{n} (R_t - T_t)^2 \right]^{-0.5} \times 100 \right\}    (3)

where n is the number of time points in the dissolution-time curve and R_t and T_t are the cumulative amounts dissolved at time t for the reference and test formulations, respectively. The number of time points, n, should be at least 3, but only one value close to the final plateau level (≥85% dissolved) should be included. An f2 value of 50 corresponds to an average difference between the test and reference curves of 10%.

Validation of In Vitro Dissolution Methods
High-quality and valuable in vitro dissolution tests are obtained by a rational design of the test method, as described above. However, there are different means to validate the method, that is, to verify that it functions as intended. One common approach is the use of calibrator tablets, which are used to check that the dissolution apparatus operates as intended, so that the hydrodynamic conditions are satisfactory. It should be noted, however, that certain formulations might be more sensitive to such factors than are the calibrator tablets. Another important aspect in the validation of a new dissolution method is to investigate how sensitive the dissolution results of the product for which the method has been developed are to minute variations in the operating conditions. Examples of factors to consider in such a test are the temperature of the test medium, the rotational speed, the volume, the sampling procedure, the medium composition, and testing performed by different operators. On the basis of such robustness tests, limits can be defined for acceptable variations of the test conditions. Statistical design may be useful to apply in such situations, as demonstrated earlier in this chapter. A further means of validation is the comparison of in vitro dissolution results with corresponding in vivo data for different formulations, to verify that the in vitro method predicts the in vivo dissolution properties (see sect.). If this has not been done during development, the in vivo validity of the method should be investigated at a later stage, especially for modified-release formulations and poorly soluble drugs.

There are basically two different aspects of the function of the formulation that can be evaluated: (1) the rate of drug dissolution and/or release, and (2) the extent of drug that is made available for absorption. The drug dissolution or release rate will directly determine the absorption rate in cases where this is the rate-limiting step in the absorption process. The importance of, and the need to study, these factors increase if the substance has problematic absorption properties, if the aim is to develop an advanced formulation, such as a modified-release product, or if the dosage form affects the biopharmaceutical properties in any other way.
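Before turning to the in vivo assessments, equation (3) can be made concrete with a short sketch: the function below computes f2 from cumulative mean percent-dissolved data at matching time points; the two profiles are invented for illustration.

```python
# Minimal sketch: f2 similarity factor of equation (3) from cumulative
# mean percent-dissolved data (Shah et al.). Profiles are hypothetical.
import math

def f2_similarity(ref, test):
    """f2 = 50 * log10(100 / sqrt(1 + mean squared point-wise difference))."""
    if len(ref) != len(test) or len(ref) < 3:
        raise ValueError("need >= 3 matching time points")
    n = len(ref)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + mean_sq_diff))

# Hypothetical cumulative % dissolved at 10, 20, 30, 45 min;
# only one point close to the plateau (>= 85% dissolved) is included.
reference = [35.0, 62.0, 81.0, 93.0]
test      = [30.0, 55.0, 76.0, 90.0]
print(f"f2 = {f2_similarity(reference, test):.1f}")  # prints ~63.8
```

As a consistency check, a constant 10% point-wise difference gives f2 ≈ 50, in line with the statement above; the invented profiles here differ by less and give f2 ≈ 64.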
Although all these different types of studies aim to investigate the influence of the dosage form on the rate and extent of absorption, different designs and means of evaluating the data are applied. This section describes different ways to assess the formulation function from plasma concentration data obtained in bioavailability studies. For a more basic understanding of pharmacokinetics, specialized textbooks should be consulted, such as Clinical Pharmacokinetics: Concepts and Applications (Rowland and Tozer, 1995).

Aspects of Study Design

Single-Dose Studies
Single-dose studies are the most sensitive for evaluating absorption properties and should generally be used for the evaluation of formulations. The main exception is when regulatory guidelines require repeated-dose studies. The drug should be administered under fasting conditions (overnight) together with 200 mL of water. No food should generally be allowed for four hours after intake, and the subjects should thereafter follow a standardized meal schedule during the study day.

Crossover Designs
In crossover designs, the same subjects receive the test and reference formulations, to avoid the influence of any interindividual differences that could affect the plasma concentration-time profile. A parallel-group design could also be used if the interindividual (between-subject) variability of the bioavailability variables is of the same magnitude as the intraindividual (within-subject) variability. Additionally, other standard design principles, such as randomization, should be applied, as described in more detail in statistical textbooks.

Washout Period
A washout period, that is, a minimum number of days between the administration of each formulation, is needed to avoid influence of the previous administration on the plasma concentration profile of the following one. As a rule of thumb, the washout period should be at least five times the elimination half-life of the drug under investigation; for a drug with a 12-hour half-life, for example, this corresponds to at least 60 hours.

Reference Formulation
In almost all studies, a reference formulation is needed, either as a comparator for assessing relative performance against the test formulation, or as a simple vehicle, such as an oral solution, for characterizing the drug substance pharmacokinetics. The stability of the solution, regarding drug compound degradation and precipitation, is an important factor to verify before the study starts. Inclusion of a parenteral reference formulation, if feasible, provides additional information, as will be discussed further below.

Number of Subjects
The number of subjects to be included in the study is determined by the inherent variability in the drug substance pharmacokinetics, the magnitude of the effects that are of interest, the desired confidence in the conclusions, costs, time, ethical aspects and, where relevant, regulatory guideline recommendations. Three different situations may be identified that require different approaches to determining the sample size (a sketch for the third follows the list):
1. Estimation of pharmacokinetic characteristics. Here the question for the pharmaceutical scientist is how precise the mean estimates of the primary variables must be, that is, how wide the confidence intervals around the means may acceptably be.
2. Detection of a difference between formulations. Here the question is how large a difference between the formulations it is of interest to detect at a certain statistical significance level.
3. Establishment of bioequivalence. The aim is to establish bioequivalence between two formulations by obtaining a confidence interval for the difference within specified limits. Here the main question is how large a risk the investigator is willing to take of obtaining nonconclusive results; inclusion of more subjects decreases the width of the confidence interval and thereby reduces the risk of not meeting the acceptance criteria.
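To make the bioequivalence case concrete, here is a minimal sketch, under deliberately simplified assumptions, of how the 90% confidence interval for the test/reference ratio narrows as subjects are added to a standard 2×2 crossover. The intra-subject coefficient of variation (25%) and the assumed point estimate (0.95) are invented for illustration, and the formula is the usual log-scale approximation, not a formal sample-size algorithm.

```python
# Minimal sketch: 90% CI half-width for the test/reference ratio in a
# 2x2 crossover, on the log scale, as a function of the number of subjects.
# Assumptions (invented): intra-subject CV = 25%, true ratio = 0.95,
# acceptance limits 0.80-1.25.
import math
from scipy import stats

cv_intra = 0.25                                   # intra-subject CV
s_w = math.sqrt(math.log(1.0 + cv_intra ** 2))    # intra-subject SD, log scale
ratio = 0.95                                      # assumed point estimate

for n in (12, 18, 24, 36, 48):                    # total number of subjects
    t_crit = stats.t.ppf(0.95, df=n - 2)          # 90% CI (two one-sided tests)
    half_width = t_crit * s_w * math.sqrt(2.0 / n)
    lo = math.exp(math.log(ratio) - half_width)
    hi = math.exp(math.log(ratio) + half_width)
    ok = 0.80 <= lo and hi <= 1.25
    print(f"n={n:3d}: 90% CI {lo:.3f}-{hi:.3f}  within 0.80-1.25: {ok}")
```

Under these invented assumptions, 12 subjects would leave the lower confidence bound below 0.80, whereas about 18 or more would be expected to yield a conclusive result; an actual study would rely on a formal power calculation (Hauschke et al.).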
For the details of these calculations, any basic statistical textbook is recommended for the first two cases; for bioequivalence studies, a dedicated reference (Hauschke et al.) should be consulted.

Plasma Sampling
The plasma-sampling schedule has to be designed so that the desired accuracy of the primary bioavailability variables can be obtained. When formulation performance is being evaluated, it is crucial to sample frequently during the absorption phase. In addition, at least three samples should be obtained during the major terminal elimination phase, to obtain a reliable estimate of the rate constant of this phase, which is needed for a correct estimate of the extent of absorption. Numerous late plasma samples, taken when the drug concentration is below the limit of quantification of the bioanalytical assay, should be avoided.

Food
Food may not only affect drug substance pharmacokinetics, such as first-pass metabolism or drug clearance; it may also influence drug dissolution or, by other means, the function of the dosage form. For example, with food, the residence time of the drug in the stomach is increased, the pH is changed, motility is altered, and bile and pancreatic secretions increase. All of these factors could potentially affect drug release and dissolution from a solid formulation. It is therefore relevant to study the influence of food on the rate and extent of drug dissolution/release during development. Such a study should include an oral solution, to allow a distinction between the effects of food on the formulation and on the drug substance. Since almost all medications are administered in the morning, studies are usually performed together with a breakfast. The composition of the meal has to be well defined, since variations can introduce unwanted variability. Generally, a heavy breakfast (approximately 1000 calories, with 50% of the energy content from fat) should be used, since this is supposed to stress potential food effects; an example is given in Table 4.

Table 4. Example of a Standardized Breakfast to Be Used in Food Interaction Studies
- 2 eggs fried in butter
- 2 strips of bacon
- 2 slices of toast with butter
- 4 ounces of hash brown potatoes
- 8 ounces of whole milk

Assessments
Evaluation of drug plasma concentrations is an indirect way of estimating the rate and amount of drug dissolution and/or absorption. A prerequisite is linear pharmacokinetics within the investigated range of delivery rates (dose per unit time) of the drug to the body; that is, the plasma concentration-time profile should be identical for different doses after correction for dose. The most common reason for nonlinear pharmacokinetics is dose-dependent first-pass metabolism.
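Since the assessments above are derived from plasma concentration-time data, a minimal noncompartmental sketch may help fix ideas: it computes Cmax, tmax, the AUC to the last sampling time by the trapezoidal rule, a terminal rate constant from the last three samples (matching the recommendation above), and the extrapolated AUC used to estimate the extent of absorption. The concentration values are invented for illustration.

```python
# Minimal sketch: noncompartmental estimates from a plasma
# concentration-time profile (hypothetical data, one subject).
import math

times = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0]   # h
conc  = [0.0, 2.1, 4.8, 5.9, 5.5, 4.2, 3.1, 1.7, 0.9, 0.26]   # mg/L

# Peak exposure
cmax = max(conc)
tmax = times[conc.index(cmax)]

# AUC(0-tlast) by the linear trapezoidal rule
auc_t = sum((t2 - t1) * (c1 + c2) / 2.0
            for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

# Terminal rate constant from log-linear regression of the last three
# samples (the text recommends at least three terminal-phase points)
xs, ys = times[-3:], [math.log(c) for c in conc[-3:]]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
lam_z = -slope                       # terminal elimination rate constant (1/h)

# Extrapolate to infinity for the extent-of-absorption estimate
auc_inf = auc_t + conc[-1] / lam_z

print(f"Cmax = {cmax:.2f} mg/L at tmax = {tmax:.1f} h")
print(f"AUC(0-tlast) = {auc_t:.2f} mg*h/L, lambda_z = {lam_z:.3f} 1/h")
print(f"AUC(0-inf)   = {auc_inf:.2f} mg*h/L")
```

Comparing AUC(0-inf) and Cmax between test and reference formulations then provides the extent and rate measures discussed above.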
References