Abstract: You are what you eat (diet) and where you eat (trophic level) in the food web. The relative abundance of pairs of stable isotopes of the organic elements carbon (e.g., the ratio of <sup>13</sup>C to <sup>12</sup>C), nitrogen, and sulfur, among others, in the tissues of a consumer reflects a weighted average of the isotope ratios in the sources it consumes, after some corrections for the processes of digestion and assimilation. We extended a Bayesian mixing model to infer the trophic positions of consumer organisms in a food web in addition to the degree to which distinct resource pools (diet sources) support consumers. The novel features of this work are: 1) trophic level estimation (vertical position in the food web) and 2) a Bayesian exposition of a biologically realistic model [1] that includes stable isotope ratios and concentrations of carbon, nitrogen, and sulfur, isotopic fractionations, and elemental assimilation efficiencies, as well as extensive use of prior information. We discuss issues of parameter identifiability in the most complex and realistic model. We apply our model to simulated data and to bottlenose dolphins (Tursiops truncatus) feeding on several numerically abundant fish species, which in turn feed on other fish and on primary-producing plants and algae present in St. George Sound, FL, USA. Finally, we discuss extensions from other work that apply to this model and three important general ecological applications. Online supplementary materials include data, OpenBUGS scripts, and simulation details.
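As a sketch of the kind of mixing relationship the abstract describes (the exact parameterization in [1] may differ), a concentration- and assimilation-weighted mixing model for the isotope ratio of element e in a consumer drawing on sources i with diet proportions f_i typically takes a form such as

\[
\delta_{e}^{\mathrm{consumer}} \;=\; \frac{\sum_{i} f_i \,[e]_i \,\epsilon_{e,i}\,\bigl(\delta_{e,i} + \Delta_{e,i}\bigr)}{\sum_{i} f_i \,[e]_i \,\epsilon_{e,i}}, \qquad \sum_{i} f_i = 1,
\]

where \([e]_i\) is the elemental concentration of source i, \(\epsilon_{e,i}\) its assimilation efficiency, and \(\Delta_{e,i}\) the trophic fractionation added per trophic step; the diet proportions \(f_i\), and in this work the consumer's trophic level, are the quantities given priors and estimated.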
Abstract: Yin [1] developed a new Bayesian measure of evidence for testing a point null hypothesis which agrees with the frequentist p-value, thereby solving Lindley’s paradox. Yin and Li [2] extended the methodology of Yin [1] to the Behrens-Fisher problem by assigning Jeffreys’ independent prior to the nuisance parameters. In this paper, we show both analytically and through simulation studies that the methodology of Yin [1] simultaneously solves the Behrens-Fisher problem and Lindley’s paradox when a Gamma prior is assigned to the nuisance parameters.
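For reference, the Behrens-Fisher problem referred to here is the comparison of two normal means when the variances are unknown and not assumed equal:

\[
X_1,\dots,X_m \overset{\text{i.i.d.}}{\sim} N(\mu_1,\sigma_1^2), \qquad Y_1,\dots,Y_n \overset{\text{i.i.d.}}{\sim} N(\mu_2,\sigma_2^2), \qquad H_0\colon \mu_1 = \mu_2,
\]

with \(\sigma_1^2\) and \(\sigma_2^2\) treated as nuisance parameters; the Gamma prior mentioned in the abstract is the prior assigned to these nuisance parameters.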
Funding: Supported by the National Institute of General Medical Sciences of the National Institutes of Health, No. R01GM100387
Abstract: AIM: To develop a framework for incorporating background domain knowledge into classification rule learning for knowledge discovery in biomedicine. METHODS: Bayesian rule learning (BRL) is a rule-based classifier that uses a greedy best-first search over a space of Bayesian belief networks (BNs) to find the optimal BN explaining the input dataset, and then infers classification rules from this BN. BRL uses a Bayesian score to evaluate the quality of BNs. In this paper, we extended the Bayesian score to include informative structure priors, which encode our prior domain knowledge about the dataset. We call this extension of BRL BRL_p. The structure prior has a λ hyperparameter that allows the user to tune the degree to which prior knowledge is incorporated during model learning. We studied the effect of λ on model learning using a simulated dataset and a real-world lung cancer prognostic biomarker dataset, by measuring the degree of incorporation of our specified prior knowledge. We also monitored its effect on model predictive performance. Finally, we compared BRL_p to other state-of-the-art classifiers commonly used in biomedicine. RESULTS: We evaluated the degree of incorporation of prior knowledge into BRL_p on simulated data by measuring the Graph Edit Distance between the true data-generating model and the model learned by BRL_p. We specified the true model using informative structure priors. We observed that increasing the value of λ increased the influence of the specified structure priors on model learning. A large value of λ caused BRL_p to return the true model, which also led to a gain in predictive performance measured by the area under the receiver operating characteristic curve (AUC). We then obtained a publicly available real-world lung cancer prognostic biomarker dataset and specified a known biomarker from the literature [the epidermal growth factor receptor (EGFR) gene]. We again observed that larger values of λ led to an increased incorporation of EGFR into the final BRL_p model.
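To make the role of λ concrete, here is a minimal, hypothetical sketch (not the actual BRL_p implementation; the function names and the exact form of the prior are illustrative assumptions) of how an informative structure prior weighted by a hyperparameter λ can be folded into a Bayesian network score:

```python
def structure_log_prior(graph_edges, prior_edges, lam):
    """Illustrative lambda-weighted log structure prior: edges that agree with the
    user's prior knowledge are rewarded, edges that disagree are penalized.
    (The exact prior used by BRL_p may differ; this only shows how lambda
    scales the influence of prior knowledge.)"""
    agreements = len(graph_edges & prior_edges)
    disagreements = len(graph_edges ^ prior_edges)  # symmetric difference
    return lam * (agreements - disagreements)


def penalized_bayesian_score(log_marginal_likelihood, graph_edges, prior_edges, lam):
    """Total score = data fit (Bayesian marginal likelihood) + structure prior.
    With lam = 0 the prior knowledge is ignored; larger lam pushes the
    best-first search toward structures consistent with the prior."""
    return log_marginal_likelihood + structure_log_prior(graph_edges, prior_edges, lam)


# Toy usage with hypothetical edge sets:
prior = {("EGFR", "prognosis")}
candidate = {("EGFR", "prognosis"), ("age", "prognosis")}
print(penalized_bayesian_score(-120.5, candidate, prior, lam=2.0))
```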
Abstract: The delayed S-shaped software reliability growth model (SRGM) is one of the non-homogeneous Poisson process (NHPP) models that have been proposed for software reliability assessment. The model is distinctive because its mean value function reflects the delay in failure reporting: there is a delay between failure detection and reporting time. The model captures the error detection, isolation, and removal processes, and is therefore appropriate for software reliability analysis. Predictive analysis in software testing is useful for modifying, debugging, and determining when to terminate the software development testing process. However, Bayesian predictive analyses of the delayed S-shaped model have not been extensively explored. This paper uses the delayed S-shaped SRGM to address four issues in one-sample prediction associated with the software development testing process. A Bayesian approach based on non-informative priors was used to derive explicit solutions for the four issues, and the developed methodologies are illustrated using real data.
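For context, the delayed S-shaped SRGM is usually written with mean value function and failure intensity

\[
m(t) = a\bigl(1 - (1 + b t)\,e^{-b t}\bigr), \qquad \lambda(t) = \frac{dm(t)}{dt} = a\,b^{2}\,t\,e^{-b t}, \qquad a, b > 0,
\]

where a is the expected total number of failures and b the failure detection rate; the S shape of m(t) is what models the delay between detection and reporting.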
Abstract: The Goel-Okumoto software reliability model, also known as the exponential non-homogeneous Poisson process model, is one of the earliest software reliability models to be proposed. The literature shows that most work on the Goel-Okumoto model concerns parameter estimation by maximum likelihood (MLE) and assessment of model fit. It is widely known that predictive analysis is very useful for modifying, debugging, and determining when to terminate the software development testing process. However, there is a conspicuous absence of literature on both classical and Bayesian predictive analyses for the model. This paper presents some results on predictive analyses for the Goel-Okumoto software reliability model. Driven by the requirement for highly reliable software in computers embedded in automotive, mechanical, and safety control systems, industrial and quality process control, real-time sensor networks, aircraft, and nuclear reactors, among others, we address four issues in single-sample prediction associated closely with the software development process. We adopt Bayesian methods based on non-informative priors to develop explicit solutions to these problems. An example with real data in the form of times between software failures is used to illustrate the developed methodologies.
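For reference, the Goel-Okumoto model is the NHPP with

\[
m(t) = a\bigl(1 - e^{-b t}\bigr), \qquad \lambda(t) = a\,b\,e^{-b t}, \qquad a, b > 0,
\]

where a is the expected total number of failures eventually detected and b the per-fault detection rate; the exponential form of m(t) gives the model its alternative name.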
Abstract: The Goel-Okumoto software reliability model is one of the earliest attempts to use a non-homogeneous Poisson process to model failure times observed during a software test interval. The model is known as the exponential NHPP model because it describes an exponential software failure curve. Parameter estimation, model fit, and predictive analyses based on one sample have been conducted for the Goel-Okumoto model; however, predictive analyses based on two samples have not. In two-sample prediction, the parameters and characteristics of the first sample are used to analyze and make predictions for the second sample, which helps save time and resources during the software development process. This paper presents some results on predictive analyses for the Goel-Okumoto software reliability model based on two samples. We address three issues in two-sample prediction associated closely with the software development testing process. Bayesian methods based on non-informative priors are adopted to develop solutions to these issues. The developed methodologies are illustrated using two sets of software failure data simulated from the Goel-Okumoto model.
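Since the illustration uses data simulated from the Goel-Okumoto model, here is a minimal simulation sketch (parameter values are assumed for illustration, not taken from the paper) that exploits the fact that for this NHPP the total failure count is Poisson(a) and, given the count, the failure times are i.i.d. exponential with rate b:

```python
import numpy as np


def simulate_goel_okumoto(a, b, rng):
    """Simulate one realization of failure times from the Goel-Okumoto NHPP
    with mean value function m(t) = a * (1 - exp(-b * t)):
    draw the total number of failures ~ Poisson(a), then sort i.i.d. Exp(b) times."""
    n = rng.poisson(a)
    return np.sort(rng.exponential(scale=1.0 / b, size=n))


# Two simulated samples, e.g., for a two-sample prediction exercise
# (a = 50 expected failures and detection rate b = 0.05 are illustrative values):
rng = np.random.default_rng(2024)
sample_1 = simulate_goel_okumoto(a=50, b=0.05, rng=rng)
sample_2 = simulate_goel_okumoto(a=50, b=0.05, rng=rng)
print(len(sample_1), sample_1[:5])
```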
Abstract: To study the reliability of a multicomponent stress-strength model under a series system, and based on the Kumaraswamy distribution, the maximum likelihood method is first used to obtain maximum likelihood estimates (MLE) of the parameters and of the stress-strength reliability; Jeffreys' rule is then used to construct a non-informative prior distribution, and the Markov chain Monte Carlo (MCMC) method is applied to obtain Bayesian estimates of the parameters and of the stress-strength reliability; finally, the inverse moment estimation method is used to obtain inverse moment estimates (IME) of the parameters and of the stress-strength reliability. Numerical simulation results show that, under different levels of system reliability and different sample sizes, a comparison of the three estimation methods indicates that the Bayesian estimates perform best and that IME outperforms MLE. This study provides a theoretical basis for investigating the reliability of multicomponent stress-strength models in series systems.
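As a sketch of the quantity being estimated, under the common series-system formulation in which the system survives only if every component strength exceeds the stress (an assumption here, as are the parameter values), the reliability can be approximated by Monte Carlo sampling from the Kumaraswamy distribution via its inverse CDF:

```python
import numpy as np


def rkumaraswamy(a, b, size, rng):
    """Sample from Kumaraswamy(a, b) by inverting the CDF F(x) = 1 - (1 - x^a)^b."""
    u = rng.uniform(size=size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)


def series_reliability_mc(k, a1, b1, a2, b2, n_sim=100_000, seed=0):
    """Monte Carlo estimate of R = P(min(X_1, ..., X_k) > Y) for a series system
    with component strengths X_j ~ Kumaraswamy(a1, b1) and a common stress
    Y ~ Kumaraswamy(a2, b2)."""
    rng = np.random.default_rng(seed)
    strengths = rkumaraswamy(a1, b1, (n_sim, k), rng)
    stress = rkumaraswamy(a2, b2, n_sim, rng)
    return np.mean(strengths.min(axis=1) > stress)


# Illustrative parameters only (not the values used in the paper):
print(series_reliability_mc(k=3, a1=2.0, b1=3.0, a2=1.5, b2=2.0))
```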