Longino concludes that this results in a range of degrees of objectivity rather than a black-and-white situation. In this vein, the complicating factors may at different times support or weaken the objectivity of science. This view is expressed through her Critical Contextual Empiricism (CCE), in which she attempts to integrate social context, scientific activity, and objectivity.
She argues that science is objective precisely because it is a social, and therefore public, activity (Longino). Publicity thus becomes the key to understanding the sense in which science is objective, for the public nature of the scientific process gives rise to a critical transformation that enhances its objectivity. This criticism limits the role of idiosyncratic subjective elements in scientific knowledge. Thus, it is the critical practices institutionalized by scientific communities that often qualify science as objective. The three philosophers support the idea that the objectivity of science derives from intersubjective agreement among the members of a scientific community, which is the result of the critical transformation of scientific knowledge and the self-correcting character of science through continuous testing.
Although in the making of science it is usually difficult to discern where the process ends and the product begins, and although idiosyncratic factors may play a role at the start of scientific research, scientific products are characterized by a certain degree of impartiality and objectivity. This is what we suggest for science education. Objectivity of science in terms of intersubjective agreement provides students and teachers with a rather realistic image of science: social processes that carry a subjective character, and a corresponding scientific product that is rather objective thanks to critical transformation.
In the next section, an episode from the history of science is proposed to highlight the issue and support instruction in it. In this context, Gilbert was one of those who began to bridge the gap between scholar and experimenter, contributing to the making of a new scientific context, one which synthesized logic and experiment and would soon become known as the New Experimental Philosophy.
Regarding the cultural and scientific context, it should be noted that the 17th century saw two new and important elements entering the scientific enterprise: formal organizations of scientists, and periodical publications for the dissemination of scientific information. Gilbert, a scientist of his era, tried to strike a balance between the modern spirit of experimentation and the medieval spirit of speculation (Boas). According to some historians, Gilbert was both an experimenter and a speculator, while for others he was both an empirical scientist and a metaphysician (Hesse). He showed clearly how science could be fruitfully pursued and how futile much of the work published up to that point had been: work published by authors who simply repeated what other people had written over the centuries about phenomena that no one had ever bothered to check.
On the other hand, Gilbert allowed his theoretical prejudices to colour his experimental reports and hence the conclusions he drew from them (Hesse). His explanation of electrical attraction had its origins in moistness. Action at a distance was a kind of reversion to the sort of mysticism and magic from which scientists were trying to break free. He noticed that objects coming into contact with an excited electric are likely to fall away from it to the ground, but he believed this to be solely due to the force of the effluvium being spent or vanished.
Although he possessed much more equipment than was required to observe the phenomena, his limiting hypothesis left no room for repulsion between electrics. The reason for that insistence lies in his animism, speculation, and personal beliefs. He wished to maintain that lodestone and iron and their properties alone are fundamental and predominant powers in the universe. At the same time, as a thorough experimenter, Gilbert strove, and managed, to separate facts from fictions, as his lasting achievements demonstrate. And all this he did through experimentation.
In the course of conducting his experiments, he found that the attracting force increases as the distance between the electric and the attracted body decreases. Moreover, Gilbert did remarkable work on the methodology of science, examining earlier hypotheses and framing new ones, and introducing an experimental methodology into electricity and magnetism. In writing up his work, he described it fully so that it would be open to confirmation by others. In fact, it is fairly accurate to say that his work constitutes the foundations of modern electromagnetic theory. Both Gilbert and his work were strongly influenced by the socio-cultural milieu of the Renaissance and the Middle Ages, as well as by personal constraints.
Gilbert, his work, and the contradictory methodological approach he took to it illustrate the issues we are concerned with here: socio-cultural influences on scientific practice and the rather objective—meaning impartial—character of scientific knowledge. A teaching and learning sequence based on this historical episode has been designed to instruct student teachers. We propose it as appropriate for highlighting, and teaching about, the ongoing philosophical debate on the subjectivity of scientific processes and the objectivity—primarily in the sense of intersubjectivity—of scientific knowledge.
School science is a consensual body of knowledge. This knowledge is objective in the sense that the scientific community has determined its validity after extended open criticism. Of course, as researchers in the field of science education suggest, students should have the opportunity to see that there is also a subjective element in the practice of science. Science has its mechanisms—crucially, peer review and critical transformation—for ensuring that such subjective biases are limited. Our aim is to draw attention to the differences between the scientific process, which includes subjective elements, and the scientific output, which is arguably objective, providing for a better understanding of the NoS in the context of science education.
At the start, students shared their views on the topic. The social face of science was revealed, its sociality reflected in the communication between scientists and communities. Moreover, students came to the conclusion that peer review, openness to dialogue, and continuous testing make science impartial and objective, despite the factors that may influence it. Students were also encouraged to apply their new mental frames to contemporary real-world issues in order to see how science works, or should work, in our era.
In this paper, we have not set out to provide the ultimate contribution to the discussion of objectivity and subjectivity in science, an issue upon which numerous philosophers and researchers within the science education community have expressed their views. The concept of intersubjectivity, which emerged from the field of epistemology, relates to the objectivity of science and can shed light on science education.
Intersubjectivity is the idea that peer review and critical transformation allow a number of different subjects to reach agreement on a given topic. In this way, the sociality of the research is assured and the objectivity of the output is increased. Such an approach seems helpful in clarifying some of the notions of objectivity and subjectivity of science in science education literature and practice.
Science is to a certain degree objective, not in spite of its sociality but because of it. This fits the wider picture of science, according to which science is culturally and socially influenced while simultaneously seeking to reduce partiality and bias through its practices.

References

Abd-El-Khalick, F. Science Teacher Education, 88.
Project Benchmarks for Science Literacy. New York: Oxford University Press.
Feminist Epistemology and Philosophy of Science.
A Contextualist Theory of Epistemic Justification. American Philosophical Quarterly, 15.
Thomas Kuhn.
Bacon and Gilbert. Journal of the History of Ideas, 12.
International Journal of Science Education, 13.
Can Science Be Objective? Hypatia, 8.
Science Framework for California Public Schools. Sacramento: California Department of Education.
Duschl, R.
Eflin, J. Journal of Research in Science Teaching, 36.
Gilbert, W. De Magnete. New York: Dover Publications.
Grene, M. Leiden: Martinus Nijhoff Publishers.
Guzey, S. Contemporary Issues in Technology and Teacher Education, 9(1).

This study examines the development of technology, pedagogy, and content knowledge (TPACK) in four in-service secondary science teachers as they participated in a professional development program focusing on technology integration into K-12 classrooms to support science-as-inquiry teaching. Science teaching is such a complex, dynamic profession that it is difficult for a teacher to stay up to date. For a teacher to grow professionally and become better as a teacher of science, a special, continuous effort is required (Showalter).
To better prepare students for the science and technology of the 21st century, current science education reforms ask science teachers to integrate technology and inquiry-based teaching into their instruction (American Association for the Advancement of Science; National Research Council [NRC]). Teaching science as emphasized in the reform documents, however, is not easy. In this light, the professional development program Technology Enhanced Communities (TEC), which is presented in this paper, was designed to create a learning community where science teachers can learn to integrate technology into their teaching to support student inquiry.
A situated learning environment supports collaboration among participants (Brown et al.). Situated learning theory was therefore used as a design framework for TEC, while technology, pedagogy, and content knowledge (TPACK) was employed as the theoretical framework for the present study. Many new educational technology tools are now available for science teachers, and teachers need ongoing support while they make efforts to develop and sustain effective technology integration. Professional learning communities, in which teachers collaborate with other teachers to improve and support their learning and teaching, are effective for incorporating technology into teaching (Krajcik et al.).
Technology integration is most commonly associated with professional development opportunities. Zeichner, for example, argued that teacher action research is an important aspect of effective professional development. According to Zeichner, to improve their learning and practices, teachers should become teacher researchers, conduct self-study research, and engage in teacher research groups. These collaborative groups provide teachers with support and opportunities to analyze their learning and practices in depth. Shulman defined seven knowledge bases for teachers: content knowledge, general pedagogical knowledge, curriculum knowledge, pedagogical content knowledge (PCK), knowledge of learners and their characteristics, knowledge of educational contexts, and knowledge of educational ends, goals, and values.
According to Shulman, among these knowledge bases, PCK plays the most important role in effective teaching. Thus, to make effective pedagogical decisions about what to teach and how to teach it, teachers should develop both their PCK and pedagogical reasoning skills.
Koehler and Mishra argued that for effective technology integration all three knowledge elements (content, pedagogy, and technology) should exist in a dynamic equilibrium. According to McCrory, science teachers need adequate knowledge of science to help students develop understandings of various science concepts. Having adequate pedagogical knowledge allows teachers to teach a particular science concept effectively to a particular group of students.
Furthermore, well-developed knowledge of technology allows teachers to incorporate technologies into their classroom instruction. TEC was designed to help secondary science teachers develop necessary knowledge and skills for integrating technology for science-as-inquiry teaching. TEC was a yearlong, intensive program, which included a 2-week-long summer introductory course about inquiry teaching and technology tools and follow-up group meetings throughout the school year associated with an online course about teacher action research.
A LeMill community Web site was created at the beginning of the program. Participant teachers created accounts and joined the TEC community Web site. Through this Web site, teachers interacted with the university researchers and their colleagues and were able to share and discuss lesson resources. Teachers engaged in inquiry-based activities while they were learning these technology tools. For example, teachers implemented a cookbook lab experiment about the greenhouse effect following the procedure given by the university educators.
Teachers then modified this activity to be inquiry based. Through implementation, discussions, and reflections, teachers developed their understanding of inquiry and of the effectiveness of technology tools for student learning and inquiry. Throughout the entire program teachers were encouraged to reflect on their classroom practices. Teachers each wrote about their experiences with technology tools and inquiry in their blogs on the LeMill community Web site. After learning about the technology tools, teachers created lesson plans that included them and loaded these lesson plans onto the LeMill Web site.
Furthermore, each teacher developed a technology integration plan to follow in the subsequent school year. During the school year, the teachers and the university educators met several times to discuss the constraints teachers had experienced in the integration of technology to practice reform-based science instruction.
In addition, during the school year teachers used the LeMill site to ask questions, share lesson plans and curricula, and reflect on their teaching. In the online discussions and face-to-face meetings, the members of the learning community, the teachers and the university educators, engaged in numerous conversations about how to overcome these barriers.
In the spring, the teachers were formally engaged in teacher action research. They designed and conducted action research studies to reflect upon their practices and their learning about technology.
During this phase, university educators and teachers worked collaboratively. Teachers each prepared a Google document with their action research report and shared it with university educators and other teachers. The researchers provided necessary theoretical knowledge for teachers to design their studies.
Conducting action research allowed teachers to see the effectiveness of using technology tools in student learning. During this phase, the collaboration among teachers and the university educators fostered the growth of the learning community. The teachers in this study were the participants in the TEC professional development program that focused on technology integration in science classrooms.
Eleven secondary science teachers enrolled in the program. These teachers had varying levels of teaching experience, ranging from 1 to 17 years: five were experienced and six were beginning secondary science teachers. Only the beginning teachers were invited to participate in the present study, since they had more commonalities with each other than with the experienced teachers. For example, the beginning teachers had all graduated from the same teacher education program and were all teaching in their academic specialty.
The teachers had recently completed preservice coursework focused on inquiry-based teaching and implementing science instruction with technology tools. The other two beginning teachers did not participate in the study, as they did not have enough time to devote to it. More information about the teachers can be found in Table 1; pseudonyms are used for all teacher participants. In this study, triangulation was achieved through various techniques of data collection (as in Patton). Electronic surveys were sent to teachers four times during the program.
To find out what, when, and how teachers used technology tools and inquiry-based teaching during the fall semester, we sent a survey at the end of that semester. Finally, after completing the online course, teachers received another survey that included questions about their overall experience in the program, what they had learned, and how they had applied their knowledge in their instruction.
Interviews were conducted at the beginning and end of the summer program. Questions included: (a) How do your students learn science best? Teachers were also required to write a technology integration plan at the end of the summer course. In their plans, teachers explained in what ways, when, and how they could use technology tools in their classrooms during the upcoming school year.
In addition, in their plans teachers talked about the constraints they might face while integrating technology into their teaching and how they could overcome these obstacles. Teachers were observed in their classrooms at least two times during the school year. Observations were deliberately scheduled during a time when the teacher was using technology.
Teacher artifacts such as lesson plans and student handouts were also collected. During the spring, each teacher designed and conducted an action research study. Teachers reflected on their practices by identifying their own questions, documenting their own practices, analyzing their findings, and sharing their findings with university educators and other teachers. A range of topics was addressed by the teachers; many, for example, focused on the impact of a particular technology tool on student learning.
As the incidents were coded, we compared them with previous incidents coded in the same category to find common patterns, as well as differences, in the data (as in Glaser). As discussed in Merriam, the categories emerging from the data were exhaustive, mutually exclusive, sensitizing, and conceptually congruent, and they reflected the purpose of the study. For example, the following categories were created for participant Cassie: misunderstanding of inquiry, lack of technological resources, unwillingness to change, mixed beliefs about technology, feeling of isolation, undeveloped conception of science, and weak teacher-student relationships.
At this time, we wrote case studies for each teacher based on the most salient categories, supported by our memos. The emergent salient categories were previous experiences with technology; beliefs about teaching, learning, and technology; the use of technology in classroom instruction; and the implementation of inquiry-based teaching.
Case studies were written as recommended in Yin. In the last phase of the analysis, we defined major themes derived from the data. At the end of the program, the participant teachers of this study, Jason, Brenna, Matt, and Cassie, met all the requirements for completing the program. However, the teachers were found to integrate technology into their teaching to varying degrees. Jason was a first-year teacher at a suburban high school. He taught 9th- and 10th-grade biology.
Before participating in the program, Jason had some experience with technology tools. He felt comfortable using concept mapping tools (CMap and Inspiration), temperature and pH probeware, and digital microscopes. At the end of the summer course, Jason designed a technology integration plan in which he specifically explained which technology tools he was planning to use during the school year.
Jason was excited to use VeeMap and CMap tools in his classroom, finding them much better at helping students clarify their prior knowledge, their experimental procedures, and the implications of their work. As a beginning teacher, however, Jason could not make effective decisions about how and when to use VeeMaps.
TEC had been his first experience with the concept of VeeMaps, and he did not feel comfortable using them in his classroom. On the other hand, Jason used CMaps once a month in his instruction. Results of this study encouraged Jason to use this tool more frequently in the next teaching year. In addition to these tools, Jason created a Web site on his school server. He posted all his notes online for students to access. His students submitted their homework electronically. Since Jason had limited access to the probeware in his school, he did not incorporate it into his teaching.
Jason was an advocate of inquiry-based teaching: whether as small guided activities or full inquiry labs, inquiry-based instruction was, for him, important to implement in place of typical cookbook labs. During the program, Jason learned how to turn cookbook labs into inquiry activities. Yet Jason had a rigid conception of inquiry: for him, all inquiry lessons, technology integrated or not, should reduce student experiments to the investigation of a single variable. In the observed inquiry lesson on bacteria, students investigated the effects of antibacterial products on strains of bacterial colonies.
Students posed their own research questions; they set up experiments and then tested the effects of variables such as detergent, soap, and toothpaste on bacterial growth. This inquiry activity did not involve any technology tools. Brenna was a second-year teacher at a suburban middle school. She taught eighth-grade Earth science. Prior to participating in the program, Brenna did not have much experience with many of the basic technology tools. She was not comfortable using computers for sharing and collaboration, although she knew about probeware, Google Earth, and CMap tools.
She had not used many of the tools previously, since she did not know how to solve technology-related problems. Before participating in the program, Brenna used only PowerPoint presentations and some Google Earth demos in her teaching. After learning various tools in the program, Brenna decided to create a 3-year technology integration plan. For example, in an observed lesson, Brenna asked her students to design their own density lab, in which they compared the densities of different materials of their choice. Brenna provided many materials, such as vinegar, vegetable oil, and irregularly shaped solids like pennies and rocks.
In their VeeMaps students wrote hypotheses, a list of new words, procedures, results, and conclusions of their experiments. Brenna was also observed while she used clickers in her teaching. Clickers, also known as student response systems or classroom response systems, help teachers create interactive classroom environments. In her classroom, Brenna used clickers to get information about student learning. This approach allowed Brenna to see student feedback in real time and address the areas where students had difficulty understanding.
Even though Brenna integrated many of the technology tools that she learned in the program, she felt that she still needed more training with technology, as she was not comfortable using many of the tools. For example, during one of the observed classes, Brenna was using a PowerPoint presentation when suddenly the computer screen turned black. Brenna could not figure out how to solve the problem, and ten minutes later she sent a student to the administration office to find the technology teacher and ask him for help.
While they waited for the technology teacher to come and fix the problem, a student offered to help Brenna figure it out. The student found that the computer had turned off because Brenna had forgotten to plug in the power cord. After the minutes-long disruption, Brenna fixed the problem and continued her lesson. Another concern of Brenna's was that she needed more time to create technology-enhanced curriculum units. Brenna thought that collaboration with her colleagues might help her create technology-rich lesson plans, which was otherwise time consuming. Brenna implemented a few inquiry activities in her classroom.
According to her, she took the ordinary labs that she had implemented before and changed parts of them to be more inquiry based. In addition, during the inquiry activities, rather than facilitating students' work, Brenna mostly directed them on what to do and what not to do. Matt was a third-year science teacher in a private middle school. He taught eighth-grade physical science and life science.
But scientific results should certainly not depend on researchers' personal preferences or idiosyncratic experiences. That, among other things, is what distinguishes science from the arts and other more individualistic human activities—or so it is said. Paradigmatic ways to achieve objectivity in this sense are measurement and quantification. What has been measured and quantified has been verified relative to a standard. The truth, say, that the Eiffel Tower is meters tall is relative to a standard unit and conventions about how to use certain instruments, so it is neither aperspectival nor free from assumptions, but it is independent of the person making the measurement.
Measurement is often thought to epitomize scientific objectivity, most famously captured in Lord Kelvin's dictum (Kelvin). Measurement can certainly achieve some independence of perspective. Measurement instruments interact with the environment, however, and so results will always be a product both of the properties of the environment we aim to measure and of the properties of the instrument. Instruments thus provide a perspectival view on the world (cf. Giere). Moreover, making sense of measurement results requires interpretation. Consider temperature measurement.
It was argued, eventually, that if a thermometer was to be reliable, different tokens of the same thermometer type should agree with each other, and the results of air thermometers agreed the most. Moreover, the procedure yielded at best a reliable instrument, not necessarily one that was best at tracking the uniquely real temperature (if there is such a thing).
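The reliability criterion described here, agreement among different tokens of the same thermometer type, can be illustrated with a small simulation. This is a hypothetical sketch (the bias and noise figures are invented): ten tokens share a common design bias, so they agree closely with one another while all missing the true value by the same margin.

```python
import random

def reading(true_temp, bias, noise_sd, rng):
    """One measurement from an instrument with a fixed design bias plus random noise."""
    return true_temp + bias + rng.gauss(0, noise_sd)

rng = random.Random(42)   # fixed seed for reproducibility
true_temp = 25.0          # the quantity the instruments aim to track

# Ten tokens of the same instrument type: same design bias, small individual noise.
readings = [reading(true_temp, bias=0.3, noise_sd=0.05, rng=rng) for _ in range(10)]

spread = max(readings) - min(readings)        # how well the tokens agree
mean_reading = sum(readings) / len(readings)  # what the type reports on average
```

The spread across tokens is small, so the type counts as reliable by the agreement criterion; yet every reading overshoots the true temperature by the shared bias. Agreement certifies consistency, not access to the uniquely real quantity.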
What Chang argues about early thermometry is true of measurement more generally. Measurements are always made against a backdrop of metaphysical presuppositions, theoretical expectations and other kinds of belief. Whether or not any given procedure is regarded as adequate depends to a large extent on the purposes pursued by the individual scientist or group of scientists making the measurements.
Especially in the social sciences, this often means that measurement procedures are laden with normative assumptions. Julian Reiss has argued that economic indicators such as consumer price inflation, gross domestic product, and the unemployment rate are value-laden in this sense. Consumer price indices, for instance, assume that if a consumer prefers a bundle x over an alternative y, then x is better for her than y, which is as ethically charged as it is controversial.
National income measures assume that nations that exchange a larger share of goods and services on markets are richer than nations where the same goods and services are provided by the government, which is likewise ethically charged and controversial. While not free of assumptions and values, the goal of many measurement procedures nevertheless remains to reduce the influence of personal biases and idiosyncrasies.
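How such assumptions enter the numbers can be made concrete by comparing two standard price-index formulas on the same data. The goods, prices, and quantities below are hypothetical: a Laspeyres index prices the base-year consumption bundle, a Paasche index prices the current one, and because consumers substitute toward goods whose prices rose less, the two formulas report different inflation rates.

```python
# Hypothetical two-good economy: bread rises 25%, cheese rises 10%,
# and consumers substitute away from bread toward cheese.
p0 = {"bread": 2.0, "cheese": 5.0}   # base-year prices
q0 = {"bread": 10,  "cheese": 4}     # base-year quantities
p1 = {"bread": 2.5, "cheese": 5.5}   # current-year prices
q1 = {"bread": 8,   "cheese": 5}     # current-year quantities

def price_index(p0, p1, q):
    """Cost of bundle q at current prices relative to its cost at base prices."""
    return sum(p1[g] * q[g] for g in q) / sum(p0[g] * q[g] for g in q)

laspeyres_idx = price_index(p0, p1, q0)  # base-year bundle: 1.175
paasche_idx = price_index(p0, p1, q1)    # current-year bundle: ~1.159
```

Choosing one of these as the official inflation figure is exactly the kind of normatively loaded decision described above: the Laspeyres form ignores substitution, while the Paasche form builds it in.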
The Nixon administration, famously, indexed social security payments to the consumer price index in order to eliminate the dependence of recipients on the flimsiest of party politics: to make increases automatic instead of a result of political negotiations (Nixon). Lorraine Daston and Peter Galison refer to this as mechanical objectivity. They write:

Finally, we come to the full-fledged establishment of mechanical objectivity as the ideal of scientific representation.
What we find is that the image, as standard bearer of objectivity, is tied to a relentless search to replace individual volition and discretion in depiction by the invariable routines of mechanical reproduction. (Daston and Galison)

In truth I am no more than an automaton that registers, without judgment and as exactly as possible, the dictate of my subconscious: my dreams, hypnagogic images and visions, and all the concrete and irrational manifestations of the dark and sensational world discovered by Freud.
Mechanical objectivity reduces the importance of human contributions to scientific results to a minimum, and therefore enables science to proceed on a large scale where bonds of trust between individuals can no longer hold (Daston). Trust in mechanical procedures thus replaces trust in individual scientists. In his book Trust in Numbers, Theodore Porter pursues this line of thought in great detail.
In particular, on the basis of case studies involving British actuaries in the mid-nineteenth century, French state engineers throughout the century, and the US Army Corps of Engineers, he argues for two causal claims. First, measurement instruments and quantitative procedures originate in commercial and administrative needs and affect the ways in which the natural and social sciences are practiced, not the other way around. The mushrooming of instruments such as chemical balances, barometers, and chronometers was largely a result of social pressures and the demands of democratic societies.
Second, he argues that quantification is a technology of distrust and weakness, not of strength. It is weak administrators, who do not have the social status, political support, or professional solidarity to defend their experts' judgments, who resort to it. They therefore subject decisions to public scrutiny, which means that the decisions must be made in a publicly accessible form.
The National Academy of Sciences has accepted the principle that scientists should declare their conflicts of interest and financial holdings before offering policy advice, or even information to the government. And while police inspections of notebooks remain exceptional, the personal and financial interests of scientists and engineers are often considered material, especially in legal and regulatory contexts.
Strategies of impersonality must be understood partly as defenses against such suspicions […]. Objectivity means knowledge that does not depend too much on the particular individuals who author it. (Porter)

Measurement and quantification help to reduce the influence of personal biases and idiosyncrasies, and they reduce the need to trust the scientist or government official, but often at a cost.
Standardizing scientific procedures becomes difficult when their subject matters are not homogeneous, and few domains outside fundamental physics are. The quantified procedures for treatment and policy decisions found in evidence-based practices are currently being transferred to a variety of sciences, such as medicine, nursing, psychology, education, and social policy. However, they often lack responsiveness to the peculiarities of their subjects and the local conditions to which they are applied (see also section 5).
Moreover, the measurement and quantification of characteristics of scientific interest is only half of the story. We also want to describe relations between the quantities and make inferences using statistical analysis. Statistics thus helps to quantify further aspects of scientific work. We will now turn to the question of whether statistical analysis can proceed free from personal biases and idiosyncrasies. The appraisal of scientific evidence is traditionally regarded as a domain of scientific reasoning where the ideal of scientific objectivity has strong normative force and where it is well entrenched in scientific practice.
Episodes such as Galileo's observations of the moons of Jupiter, Lavoisier's calcination experiments, and Eddington's observation of the eclipse are found in all philosophy of science textbooks because they exemplify how evidence can be persuasive and compelling to scientists with different backgrounds. Inferential statistics, the field that investigates the validity of inferences from data to theory, tries to answer this question.
It is extremely influential in modern science. Carnap, for instance, is interested in determining the degree of confirmation of a hypothesis H relative to a given body of evidence E; on his account, the degree of confirmation of hypothesis H relative to E is the conditional probability of H given E.
But is this probability an objective quantity, free from personal bias? The degree of confirmation depends on the choice of a logical language and of symmetry principles for that language, and not all such choices are equally suited: for example, assigning equal weight to all complete state descriptions would not allow for learning from experience. Dismissing these as merely technical matters misses the point: they are subjective choices on which the inductive inferences are built. Carnap's approach is objective in so far as the degree of confirmation is intersubjectively compelling once a logical language and appropriate symmetry principles for this language are agreed upon; however, it is subjective in the sense that rational agents may disagree on these principles.
See also the entry on inductive logic. Closely related to Carnap's logical probability framework is the Bayesian approach to confirmation and evidence, developed first by Frank Ramsey. It is outspokenly subjective: probability is used for quantifying a scientist's subjective degrees of belief in a particular hypothesis. These degrees of belief are changed by conditionalization on observed evidence E, making use of Bayes' Theorem:

    P(H | E) = P(E | H) P(H) / P(E)
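As a numerical illustration of conditionalization, the following sketch turns a prior degree of belief into a posterior via Bayes' Theorem. The helper function and all numbers are our own, purely illustrative choices, not part of any particular Bayesian textbook treatment:

```python
# Bayesian conditionalization: updating a degree of belief P(H) to
# P(H | E) via Bayes' Theorem. All numbers are purely illustrative.

def conditionalize(prior, likelihood, likelihood_alt):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    # Law of total probability gives the marginal probability of E.
    p_e = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / p_e

# An agent with a 10% prior in H observes evidence E that is five
# times more likely under H (0.5) than under not-H (0.1).
posterior = conditionalize(prior=0.1, likelihood=0.5, likelihood_alt=0.1)
print(round(posterior, 3))  # 0.357
```

Note that the update is driven entirely by the ratio of the two likelihoods: the same evidence leaves an agent's belief unchanged when it is equally probable under H and not-H.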
These days, the Bayesian approach is extremely influential in philosophy, but also in scientific disciplines such as statistics, economics, and biology. The difference between the Bayesian approach and Carnap's lies in the philosophical motivation, and in the different understanding of confirmation judgments: for Carnap, they are primarily a consequence of certain logical ways of carving the world at its joints; for the Bayesian, they express a genuinely subjective uncertainty judgment. See also the entry on Bayes' Theorem. Can we ground the objectivity of scientific evidence in a framework that is explicitly based on personal attitudes?
Bayesians have supplied several arguments to the effect that subjective probability is not equal to personal bias, which we will review in turn. As argued by Howson, and by Howson and Urbach, the Bayesian's aim is not to determine an intersubjectively binding degree of confirmation, but to provide sound inference rules for learning from experience. In the same way that deductive logic does not judge the correctness of the premises but just advises you what to infer from them, Bayesian inductive logic tells you how to change your own attitudes as soon as you encounter evidence.
All other updating rules are susceptible to so-called Dutch books: betting based on following such rules will lead to sure monetary losses. In addition, convergence theorems guarantee that, as long as novel evidence keeps coming in, the degrees of belief of agents with very different initial attitudes will finally converge (Gaifman and Snir). However, one may object that the real problem does not lie with the internal soundness of the updating process, but with the choice of an appropriate prior, which may be beset with idiosyncratic bias and manifest social values.
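A toy simulation can make the convergence claim concrete. Below, two hypothetical agents with sharply opposed priors over two simple hypotheses about a coin's bias update on the same data; the gap between their posteriors shrinks as evidence accumulates. This is an illustrative sketch of the phenomenon, not the general convergence theorem:

```python
def posterior(prior_h1, p1, p2, heads, tails):
    """P(H1 | data), where H1 says P(heads) = p1 and H2 says P(heads) = p2."""
    like1 = p1**heads * (1 - p1)**tails
    like2 = p2**heads * (1 - p2)**tails
    return like1 * prior_h1 / (like1 * prior_h1 + like2 * (1 - prior_h1))

# H1: the coin lands heads with probability 0.7; H2: with probability 0.4.
# Agent A starts almost convinced of H1, agent B almost convinced of H2.
for heads, tails in [(7, 3), (35, 15), (70, 30)]:
    a = posterior(0.9, 0.7, 0.4, heads, tails)
    b = posterior(0.1, 0.7, 0.4, heads, tails)
    print(heads + tails, round(abs(a - b), 6))  # the gap shrinks toward 0
```

With ten tosses the two agents still disagree substantially; with a hundred tosses the difference between their posteriors is negligible, even though their priors were mirror images of one another.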
Modern Bayesians prefer to measure degree of confirmation in terms of the increase in degree of belief that evidence E confers on hypothesis H, rather than the probability of H conditional on E. Several philosophers have argued that there is just one reasonable measure of confirmation as increase in degree of belief that satisfies a set of generally accepted desirable constraints. If one of these arguments were sound, then the incremental degree of confirmation would provide a bias-free assessment of the evidence.
For solving applied problems, however, Bayesian statisticians almost uniformly use the Bayes factor, that is, the ratio of posterior to prior odds in favor of a hypothesis. The Bayesian approach can also attempt to eliminate personal bias by imposing additional constraints on an agent's rational degrees of belief.
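For two simple hypotheses the Bayes factor reduces to the likelihood ratio, and it is the same no matter which prior odds an agent starts from; this is why it is often advertised as a comparatively objective summary of the evidence. A minimal sketch with made-up likelihood values:

```python
# The Bayes factor: the factor by which evidence shifts the odds between
# two hypotheses. For simple hypotheses it equals the likelihood ratio.

def bayes_factor(like_h1, like_h2):
    """Ratio of the likelihoods P(E | H1) / P(E | H2)."""
    return like_h1 / like_h2

def posterior_odds(prior_odds, like_h1, like_h2):
    """Posterior odds = prior odds * Bayes factor."""
    return prior_odds * bayes_factor(like_h1, like_h2)

# Suppose P(E | H1) = 0.32 and P(E | H2) = 0.08 (invented values).
# Agents with very different prior odds all multiply them by the same factor:
for prior_odds in (0.25, 1.0, 9.0):
    post = posterior_odds(prior_odds, 0.32, 0.08)
    print(prior_odds, post, round(post / prior_odds, 6))
```

The last column is the same for every agent, which illustrates why the Bayes factor, unlike the posterior probability itself, does not depend on the prior.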
One way to do so consists in adopting the Principle of Maximum Entropy (MaxEnt), going back to Jaynes and developed philosophically by Jon Williamson. Williamson retains the demand that degrees of belief satisfy the probability axioms, but dismisses updating by conditionalization in equation 2. Instead, he demands that the agent's degrees of belief must be in sync with empirical constraints and that, conditional on these constraints, they must be equivocal, that is, as middling as possible.
This latter constraint amounts to maximizing the entropy of the probability distribution in question. Since maximizing the right-hand side in 4 leads to a unique solution, subjective bias is eliminated. Instead of going for MaxEnt, one could also assume objective priors, that is, prior probabilities which do not represent an agent's factual attitudes, but are determined by principles of symmetry, mathematical convenience, or maximizing the influence of the data on the posterior. In general, however, the practical achievements of objective Bayesian approaches come at the expense of weakening their philosophical foundations.
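The equivocation idea can be illustrated numerically: among probability distributions over six outcomes, with no constraints beyond normalization, the uniform distribution uniquely maximizes Shannon entropy. A small check using two toy distributions of our own choosing:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [1 / 6] * 6
skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]

print(round(entropy(uniform), 4))  # 1.7918, i.e. log(6), the maximum
print(round(entropy(skewed), 4))   # strictly smaller
```

Any departure from the uniform distribution lowers the entropy, which is exactly what the demand for "equivocal", maximally middling degrees of belief exploits: absent constraints, MaxEnt forces the uniform assignment and so leaves no room for a personal choice of prior.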
Thus, Bayesianism provides no more than a partial answer to securing scientific objectivity from personal idiosyncrasy. On the other hand, the objections to the above proposals are no knockout arguments, and further developments of the discussed approaches may help to reconcile transparency about subjective assumptions with objectivity in interpreting statistical evidence. That said, one may argue that the theories we discussed so far all miss the target.
Bayesians primarily address the question of which theories we should rationally believe in; the decision procedures reviewed in section 3 address the question of which decisions are rational. Both analyze the concept of statistical evidence from the vantage point of their primary focus: beliefs and decisions. But can't we quantify the support for or against a hypothesis in an intersubjectively compelling way, without buying into a subjectivist or a behavioral framework? This is the ambition of frequentist and likelihood-based explications of scientific evidence.
The frequentist conception of evidence is based on the idea of the statistical test of a hypothesis. Under the influence of the statisticians Neyman and Pearson, tests were often regarded as rational decision procedures that minimize the relative frequency of wrong decisions in a hypothetical series of repetitions of a test. As we have seen in section 3, however, such decisions require judgments that go beyond the evidence itself.
Moreover, the losses associated with erroneously accepting or rejecting a hypothesis depend on the context of application, which may be unbeknownst to the experimenter. This speaks for a division of labor where scientists restrict themselves to an evidential interpretation of statistical tests, and leave the actual decisions to policy-makers and regulatory agencies. Such an approach has been developed by Ronald A. Fisher (1890–1962), and it has become the orthodox solution to statistical inference problems.
The guiding idea is this: if a result has lower probability under the null hypothesis H than most other possible results, then it undermines the tenability of H (Fisher). The strength of evidence against the tested hypothesis is then equal to the p-value: the probability of obtaining a result at least as extreme as the actually observed data, assuming that H is true. Figure 1 gives a graphical illustration.
This probability measures how strongly E speaks against H, compared to other possible results, and the lower it is, the stronger the evidence against H. Conventionally, a p-value smaller than .05 counts as significant evidence against the tested hypothesis. This concept of evidence is apparently objective, but beset with a variety of problems (see Sprenger for a detailed discussion). There is no intersubjectively compelling justification why this or any other particular standard of evidence should be used in order to quantify the concept of significance.
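As a concrete illustration of a Fisherian significance test, consider a one-sided test of the hypothesis that a coin is fair. The data and the test are invented for illustration, and the computation uses only the exact binomial distribution:

```python
from math import comb

def p_value_one_sided(heads, n, p0=0.5):
    """P(at least `heads` successes in n trials) under the null H: p = p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(heads, n + 1))

# 16 heads in 20 tosses of a putatively fair coin:
p = p_value_one_sided(16, 20)
print(round(p, 4))  # 0.0059
```

Since 0.0059 falls below the conventional .05 threshold, the result would be declared "significant"; but, as the text notes, nothing in the statistics itself dictates that .05, rather than .01 or .1, is the right place to draw the line.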
From an institutional point of view, the frequentist conception of p-values is problematic as well. What is more, even in the absence of a causal relation between two quantities, one may find a significant, and therefore publishable, result by pure chance. The probability that this happens by accident is equal to the statistical significance threshold.
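The point can be illustrated by simulation: when the null hypothesis is true, p-values are (approximately) uniformly distributed, so roughly a fraction alpha of such studies will nonetheless report "significant" results. The sketch below assumes a simple two-sided test with a standard-normal test statistic:

```python
import math
import random

random.seed(42)

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 10,000 hypothetical studies in which the null hypothesis is true by
# construction: the test statistic is drawn from its null distribution.
trials, alpha = 10_000, 0.05
significant = sum(two_sided_p(random.gauss(0, 1)) < alpha
                  for _ in range(trials))
print(significant / trials)  # hovers around alpha = 0.05
```

With enough null studies in circulation, a fixed 5% false-positive rate combined with selective publication of "significant" findings is exactly the mechanism behind the worry discussed next.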
Ioannidis therefore concludes that most published research findings are false, an effect partially due to the frequentist logic of evidence. Indeed, researchers often fail to replicate the findings of another scientific team, and periods of excitement and subsequent disappointment are not uncommon in frontier science. Finally, there is a principled philosophical objection against the objectivity of frequentist evidence: sample space dependence.
That is, in frequentist statistics, the strength of the evidence depends on which results could have been observed but were not observed. For instance, the post-experimental assessment of the evidence has to be changed when we learn about a defect in our measurement instrument, even if that defect is not relevant for the range of the actually observed results!
On a Bayesian reading, this implies that frequentist evidence statements depend on the intentions of the experimenter (Edwards, Lindman and Savage; Sprenger): Would she have continued the trial if the results had been different? How would she have reacted to unforeseen circumstances? Freedom from personal bias seems hard to realize if one's inference depends on the answer to such questions.
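The classic illustration of sample space dependence involves optional stopping: the very same sequence of coin tosses receives different p-values depending on whether the experimenter planned to toss a fixed number of times or to toss until a fixed number of tails appeared. The numbers below follow that standard textbook example:

```python
from math import comb

# Same data -- 9 heads and 3 tails -- but two different experimental
# designs, hence two different sample spaces and two different p-values
# against H: the coin is fair (one-sided, toward heads).

# Design 1: toss exactly 12 times (binomial sampling).
p_binomial = sum(comb(12, k) * 0.5**12 for k in range(9, 13))

# Design 2: toss until the 3rd tail appears (negative binomial sampling).
# A result at least as extreme means the first 11 tosses contain at
# most 2 tails.
p_neg_binomial = sum(comb(11, k) * 0.5**11 for k in range(0, 3))

print(round(p_binomial, 4))      # 0.073  -- not significant at the .05 level
print(round(p_neg_binomial, 4))  # 0.0327 -- significant at the .05 level
```

The observed data are identical in both designs; only the experimenter's stopping intention differs, yet one analysis crosses the conventional significance threshold and the other does not.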
A middle ground between frequentist and Bayesian inference is provided by likelihoodist inference, which goes back to the work of Alan Turing and I. J. Good. On this view, the evidential import of E is assessed by comparing the probabilities of the actual evidence E under the competing hypotheses; these probabilities are called the likelihoods of H on E. Only a minority of statisticians, however, endorse this approach. Moreover, the likelihoodist cannot use subjective probability in order to transform a composite hypothesis into a simple one.
Summing up our findings, no statistical theory of evidence manages to eliminate all sources of personal bias and idiosyncrasy.
The Bayesian is honest about it: she considers subjective assumptions to be ineliminable from scientific reasoning. This does not rule out that contrastive aspects of statistical evidence may be quantified in an objective way. The frequentist conception based on p-values still dominates statistical practice, but it suffers from several conceptual drawbacks, in particular the misleading impression of objectivity. This also has far-reaching implications for fields such as evidence-based medicine, where randomized controlled trials (the most valuable source of evidence) are typically interpreted in a frequentist way.
A defense of frequentist inference should, in our opinion, stress that the relatively rigid rules for interpreting statistical evidence facilitate communication and assessment of research results in the scientific community, something that is harder to achieve for a Bayesian. So far everything we discussed was meant to apply across all or at least most of the sciences. In this section we will look at a number of specific issues that arise in the social sciences, in economics, and in evidence-based medicine. There is a long tradition in the philosophy of social science maintaining that there is a gulf in terms of both goals as well as methods between the natural and the social sciences.
See also the entries on hermeneutics and Max Weber. Understood this way, social science lacks objectivity in more than one sense. One of the more important debates concerning objectivity in the social sciences concerns the role value judgments play and, importantly, whether value-laden research entails claims about the desirability of actions. Max Weber held that the social sciences are necessarily value laden. However, they can achieve some degree of objectivity by keeping out the social researcher's views about whether agents' goals are commendable.
In a similar vein, contemporary economics can be said to be value-laden because it predicts and explains social phenomena on the basis of agents' preferences. Nevertheless, economists are adamant that they are not in the business of telling people what they ought to value. As Weber puts it, "All knowledge of cultural reality, as may be seen, is always knowledge from particular points of view." The reason for this is twofold. First, social reality is too complex to admit of full description and explanation.
So we have to select. This is because, second, in the social sciences we want to understand social phenomena in their individuality, that is, in their unique configurations that have significance for us. Values solve a selection problem. They tell us what research questions we ought to address because they inform us about the cultural importance of social phenomena:
Only a small portion of existing concrete reality is colored by our value-conditioned interest and it alone is significant to us. It is significant because it reveals relationships which are important to us due to their connection with our values. It is important to note that Weber did not think that social and natural science were different in kind, as Dilthey and others did.
Social science too examines the causes of phenomena of interest, and natural science too often seeks to explain natural phenomena in their individual constellations. The role of causal laws is different in the two fields, however. Whereas establishing a causal law is often an end in itself in the natural sciences, in the social sciences laws play an attenuated and accompanying role as mere means to explain cultural phenomena in their uniqueness.
Nevertheless, for Weber social science remained objective in at least two ways. First, once research questions of interest have been settled, answers about the causes of culturally significant phenomena do not depend on the idiosyncrasies of an individual researcher (Weber a: 84, emphasis original).
The claims of social science can therefore be objective in our third sense (see section 4). Second, given a policy goal, a social scientist could make recommendations about effective strategies to reach the goal; but social science was to be value-free in the sense of not taking a stance on the desirability of the goals themselves. This leads us to our conception of objectivity as freedom from values. Contemporary mainstream economists hold a view concerning objectivity that mirrors Max Weber's (see above). On the one hand, it is clear that value judgments are at the heart of economic theorizing.
Preferences are evaluations. Thus, to the extent that economists predict and explain market behavior in terms of rational choice theory, they predict and explain market behavior in a way laden with value judgments. Optimality is determined by the agent's desires, not the converse.
(Paternotte). However, standard economics has no therapeutic ambition; as Gul and Pesendorfer put it: "Economics cannot distinguish between choices that maximize happiness, choices that reflect a sense of duty, or choices that are the response to some impulse. Moreover, standard economics takes no position on the question of which of those objectives the agent should pursue." (Gul and Pesendorfer: 8) According to the standard view, all that rational choice theory demands is that people's preferences are internally consistent; it has no business telling people what they ought to prefer, or whether their preferences are consistent with external norms or values.
Economics is thus value-laden, but laden with the values of the agents whose behavior it seeks to predict and explain, and not with the values of those who seek to predict and explain this behavior. Whether social science, and economics in particular, can be objective in this sense (Weber's and the contemporary economists') is controversial. On the one hand, there are some reasons to believe that rational choice theory (which is at work not only in economics but also in political science and other social sciences) cannot be applied to empirical phenomena without referring to external norms or values (Sen; Reiss). On the other hand, it is not clear that economists and other social scientists, qua social scientists, shouldn't participate in a debate about social goals.
For one thing, trying to do welfare analysis in the standard Weberian way tends to obscure rather than eliminate normative commitments (Putnam and Walsh). Obscuring value judgments can be detrimental to the social scientist as policy adviser because it will hamper rather than promote trust in social science. For another, economists are in a prime position to contribute to ethical debates, for a variety of reasons, and should therefore take this responsibility seriously (Atkinson). Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiological rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.
(Guyatt et al.) But proponents of evidence-based practices have a much narrower concept of evidence in mind: analyses of the results of randomized controlled trials (RCTs). This movement is now very strong in biomedical research, development economics and a number of areas of social science, notably psychology, education and social policy, especially in the English-speaking world. The goal is to replace subjective, biased, error-prone, idiosyncratic judgments by mechanically objective methods. But, as in other areas, attempting to mechanize inquiry can lead to reduced accuracy and utility of the results.
Causal relations in the social and biomedical sciences hold on account of highly complex arrangements of factors and conditions. Whether, for instance, a substance is toxic depends on details of the metabolic system of the population ingesting it, and whether an educational policy is effective depends on the constellation of factors that affect the students' learning progress.
If an RCT was conducted successfully, the conclusion about the effectiveness of the treatment or the toxicity of the substance under test is certain for the particular arrangement of factors and conditions of the trial (Cartwright). But unlike the RCT itself, many of whose aspects can be implemented relatively mechanically, applying the result to a new setting (recommending a treatment to a patient, for instance) always involves subjective judgments of the kind proponents of evidence-based practices seek to avoid, such as judgments about the similarity of the test population to the target or policy population. While unbalanced allocations can certainly happen by chance, randomization still provides some warrant that the allocation was not done on purpose, with a view to promoting somebody's interests.
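The two halves of that claim can be illustrated with a small simulation: random allocation produces no systematic imbalance in a prognostic covariate between arms, although any single allocation may be unbalanced by chance. The population and the covariate here are invented for illustration:

```python
import random
import statistics

random.seed(7)

# A hypothetical trial population with one prognostic covariate ("age").
ages = [random.gauss(50, 12) for _ in range(200)]

diffs = []
for _ in range(1000):
    random.shuffle(ages)  # random allocation: first 100 treated, rest control
    diff = statistics.mean(ages[:100]) - statistics.mean(ages[100:])
    diffs.append(diff)

print(round(statistics.mean(diffs), 2))      # near 0: no systematic favoritism
print(round(max(abs(d) for d in diffs), 2))  # yet single trials can be unbalanced
```

Averaged over many allocations the difference between arms vanishes, which is the sense in which the procedure is impartial a priori; the second number shows why balance in any one trial is a matter of luck, not guarantee.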
A priori, the experimental procedure is thus more impartial with respect to the interests at stake. It has thus been argued that RCTs in medicine, while no guarantor of the best outcomes, were adopted by the U.S. Food and Drug Administration (FDA) to different degrees during the 1960s and 1970s in order to regain public trust in its decisions about treatments, which it had lost due to the thalidomide and other scandals (Reiss and Teira; Teira). It is important to notice, however, that randomization is at best effective with respect to one kind of bias.
Important other epistemic concerns are not addressed by the procedure but should not be ignored (Worrall). As we have seen in the preceding sections, we want scientific objectivity because, and to the extent that, we want to be able to trust scientists, their results and recommendations. One possible lesson to draw from the fairly poor success record of the proposed conceptions of scientific objectivity is that these conceptions have the logical order of the ideas mistaken.
The obvious alternative is to reverse that order: start with what we want and then look for features that might promote the thing in which we are ultimately interested (Fine). That is, anything goes, as long as the practice promotes trust in science. In contrast to the three traditional alternatives, we may call this conception instrumentalism about scientific objectivity.
From an instrumentalist point of view, defining the objective features of scientific inquiry becomes an empirical and contextual issue.
It is empirical in that anything that stands in the right kind of causal relation with public trust will count as an objective feature of science. There is no way to tell a priori what these features might be. It is contextual in that there is at least the possibility that these features vary with time, place, discipline and other contextual elements. It may well be, for instance, that one or more of the three traditional understandings have once promoted trust in science, even if they no longer do so. It may also be that it is these features that promote trust in one science but not others.
And it may be that trust-making features vary with social and political circumstances—different features may be salient in different stages of development or between war and peace times and so on. There is no reason to think that sciences that represent the world from a perspective, in which non-epistemic values play important roles in scientific decision-making and in which personal elements affect outcomes cannot be trusted by the public. Scientific objectivity in the instrumentalist conception is thus both valuable, as it gives us something worth pursuing, and attainable.
At the same time, arguably, instrumentalism about objectivity is more of a research program than an explication of the concept. Suppose we have a domain of science which, at a particular place and time, fares very well with respect to promoting public trust. How are we to tell which features of these scientific practices are responsible for the success? It is obviously impossible to run experiments. Just observing and comparing historical episodes is not likely to clinch results as there will always be numerous differences between any two domains, historical episodes and places.
Moreover, if objectivity is identified with features that promote trust in science, how do we prevent clever marketing from counting as a crucial feature of scientific objectivity? Or suppose that a science loses public trust (say, as macro- and financial economics did after the Financial Crisis). What might be effective strategies to regain it? Arguably, instrumentalism raises more questions than it answers. Yet it faces none of the obvious difficulties of the alternative views, so it might well be a worthwhile research project.
The challenge for proponents of traditional views of objectivity consists in showing how specific features of their favorite conception of objectivity e. The instrumentalist and the traditional research programs may therefore fruitfully complement each other. So is scientific objectivity desirable? Is it attainable? That, as we have seen, depends crucially on how the term is understood. We have looked in detail at three different conceptions of scientific objectivity: faithfulness to facts, value-freedom and freedom from personal biases.
In each case, there are at least some reasons to believe that either science cannot deliver full objectivity in this sense, or that it would not be a good thing to try to do so, or both. Does this mean we should give up the idea of objectivity in science? We have shown that it is hard to define scientific objectivity in terms of a view from nowhere and freedom from values and from personal bias. It is a lot harder to say anything positive about the matter. Perhaps it is related to a thorough critical attitude concerning claims and findings, as Popper thought.
Perhaps it is the fact that many voices are heard, equally respected and subjected to accepted standards, as Longino defends. Perhaps it is something else altogether. Perhaps it is a combination of several factors, including some that have been discussed in this article. However, one should not as yet throw out the baby with the bathwater. Like those who defend a particular explication of scientific objectivity, the critics struggle to explain what makes science objective, trustworthy and special.
For instance, our discussion of the value-free ideal (VFI) revealed that alternatives to the VFI are at least as problematic as the VFI itself, and that the VFI may, with all its inadequacies, still be a useful heuristic for fostering scientific integrity and objectivity. Whatever objectivity is, it should come as no surprise that finding a positive characterization of what makes science objective is hard. If we knew an answer, we would have done no less than solve the problem of induction, because we would know what procedures or forms of organization are responsible for the success of science.
Work on this problem is an ongoing project, and so is the quest for understanding scientific objectivity.