Wednesday, March 6, 2019
Research Knowledge and Assessment
This essay explores how these inquiries may be conceptualized, described, evaluated, and explained through research methods.

Philosophy of Research

Quantitative scientific research relies on data taken from empirical methods based on observation and experience (Myers & Hanson, 2002; Stanchion & Stanchion, 2003). These systematic empirical methods can be used as inferential mathematical tools for evaluating a sample from a population. Consequently, the empirical calculations of phenomena in a sample may be applied to the entire population from which the sample was derived (Ho, 2010).

Research Terminologies

Certain terms in research connote philosophical programs of approaches to collecting and evaluating information. Through the scientific process, research studies begin by developing questions or hypotheses, then collect information to help answer the questions or test the hypotheses. Research data are collected, analyzed, and interpreted to reach conclusions (Ladino, Spaulding, & Vogel, 2010, p. 12). However, qualitative and quantitative studies have similarities and dissimilarities in the scientific process due to the different cognitive approaches in research designs. Qualitative studies utilize inductive reasoning, while quantitative studies apply deductive logic (p. 10). Figure 1 illustrates the specifics, similarities, and differences of these concepts in qualitative and quantitative research paradigms. The scientific method, illustrated in Figure 2, acquires and assesses knowledge through observation and experience (Drew, Yardman, & Hose, 2008). The philosophy of positivism utilizes aspects of the scientific method in social research. Positivist researchers hold that only what is observed can be evaluated in an objective manner. This means that only observable behavior can be measured, without regard to motives, perspectives, or feelings (Social Research Methods, 2006).
Conversely, post-positivist philosophy does not accept that objectivity is infallible, because knowledge is developed through social constructs, and this knowledge cannot be divorced from the personal perceptions which determine the legitimacy of knowledge (Ryan, 2006, p. 16). The preceding statements suggest that post-positivists believe deductions from observations may be relative and inexact (p. 20). This lends credence to subjectivity in research evaluations (Ratter, 2002).

Objectivity in Research

Objectivity can be described as a mental state in which the personal biases, preferences, and perspectives of researchers do not contaminate the selection and analysis of data (Sociology Guide, 2014). Objectivity is paramount in ensuring the veracity of a study. However, in social and educational studies, objectivity presupposes a type of reality (Ratter, 2002). If that reality is created by the researcher or observer, then it may be more subjective than objective (p. 3). These ideas exemplify the challenges faced by those in qualitative or mixed-methods studies, who must judge the depth, or the breadth and depth, of research findings, respectively (Walden University, n.d.). Though quantitative research may appear objective through the use of mathematical calculations, subjectivity may occur in deciding what data are to be measured and the types of measuring instruments to be employed (Slashing, 2003).

Philosophical Developments in Research

Scientific realism is a quantitative approach to research in which numerical formulas are used to analyze data, and these data are used to represent constructs and variables (Ladino, Spaulding, & Vogel, 2010). Positivists utilize the tenets of scientific realism because they hold that the social and psychological world can be evaluated mathematically in the same way that quantitative research explains phenomena in the natural world (p. 3). Social constructivism states that phenomena must be understood
as complex wholes, and researchers must understand reality through the perspectives of the participants in a study. Social constructivism advocates hypotheses that are created to achieve meaning through multiple realities formed by diverse human perceptions in a social world. Social constructivism is usually employed in ethnographies and other types of social research. Advocacy and liberatory frameworks also embrace a multiplicity of realities derived from social, economic, cultural, and political milieus. This philosophy aims at research that advocates freedom from oppression and is a common framework for education research studies involving minorities or socially oppressed groups of people (Freire, 1970). Pragmatism is not focused on defining a real or socially constructed reality, but seeks practical answers to inform correct practices and designs (Ladino, Spaulding, & Vogel, 2010, p. 16). Pragmatists frequently use a mixed-methods approach to research for analyzing quantitative and qualitative data. Case studies utilize the methods of pragmatism (p. 60).

Conceptual and Theoretical Frameworks

A framework can be created through concepts or theories (Ladino, Spaulding, & Vogel, 2010, p. 13). A conceptual framework organizes ideas or variables in a cogent and sequential manner, whereas a theoretical framework focuses on identifying the possible relationships among the ideas or concepts and develops theories for these relationships (Niagara, 2012). These theories provide a foundation for the beginnings of an investigation and help maintain a focus for the direction of a study. A conceptual framework can also be defined as a structure that describes the natural progression of a phenomenon through a theoretical framework that gives an explanation of how some constituents of the phenomenon may be related (Camp, 2001).
In summary, a conceptual framework may contain the concepts of a study, but it does not explain the relationships among the ideas or variables, whereas a theoretical framework can explain the associations among variables and how these associations relate to the research investigation (Science, n.d.).

Core Concepts of Research Design

The research question is the basis for the research study and should include ethical guidelines (Ladino, Spaulding, & Vogel, 2010, p. 388). It identifies dependent and independent variables in causal-comparative research, and it targets variables that are expected to be related in correlational studies (pp. 388-389). In quantitative studies, the research question is clarified by the hypothesis, which is a declarative statement or tentative position on the identified problem (Drew, Yardman, & Hose, 2008, p. 78). Unlike quantitative investigations, the research questions in qualitative studies focus more on processes than on outcomes (p. 389). Once the research question has been refined to a specific idea, the statement of purpose for the study can be expressed in clear and concise terms (Ladino, Spaulding, & Vogel, 2010, p. 89). The specificity of the research question and the evident purpose of the study are derivatives of the literature review, which mainly focuses on primary, peer-reviewed articles related to the research question.

Population and Sample

Inferential statistics utilizes a subset of a population called a sample. Research results derived from the sample may be generalized to the population from which it was drawn. However, in order for a study to produce accurate results and conclusions from a sample, it is important to differentiate between a theoretical population and an accessible population (Social Research Methods, 2006).
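Before distinguishing these two populations further, the inferential step itself, generalizing from a sample statistic to a population parameter, can be sketched in a few lines of Python. This is a minimal illustration with invented scores, using a normal-approximation confidence interval rather than any particular study's method:

```python
import math
import statistics

def estimate_population_mean(sample, z=1.96):
    """Estimate a population mean from a sample, with an approximate
    95% confidence interval (normal approximation, z = 1.96)."""
    n = len(sample)
    mean = statistics.mean(sample)
    # Sample standard deviation (n - 1 denominator) divided by sqrt(n)
    # gives the standard error of the mean.
    se = statistics.stdev(sample) / math.sqrt(n)
    return mean, (mean - z * se, mean + z * se)

# Hypothetical test scores sampled from a larger student population
scores = [72, 85, 78, 90, 66, 81, 75, 88, 79, 84]
mean, (low, high) = estimate_population_mean(scores)
print(f"sample mean = {mean:.1f}, approx. 95% CI = ({low:.1f}, {high:.1f})")
```

The interval expresses the key idea of inferential statistics: the sample mean stands in for the unknown population mean, with a quantified margin of uncertainty.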
The theoretical population should possess clear characteristics related to the variables to be studied in the sample. An accessible population may be available for a study, but if its traits are not specified within the sample it produces, the accuracy of the research is compromised (Expellable, 2009).

Variables and Research Findings

A variable is an object or entity that takes on different quantitative or qualitative values depending on the circumstances of a study (Ho, 2010, p. 127). In educational research, a variable can also be defined as a measurable hypothetical concept (construct) that has been developed from a theoretical framework (Ladino, Spaulding, & Vogel, 2010, p. 3). When these variables are translated into data, the findings can be reported quantitatively, qualitatively, or both. Quantitative findings are numerical in nature and can be reported through Pearson product-moment correlations, multiple-regression analysis, t-tests, chi-square tests, and other procedures (p. 305). Qualitative findings may be reported through the use of triangulation techniques, coding, themes, and other procedures (pp. 189-193).

Assumptions, Limitations, and Delimitations

Assumptions are constituents of a study which may not be under the control of the researcher, but whose absence from the study would make it irrelevant (Simon, 2011). Limitations are uncontrollable, potential weaknesses in a study, whereas delimitations are controllable characteristics that limit the scope and define the boundaries of a study (p. 2). This is why all three factors must be considered when research is conducted.

Validity and Reliability

Validity describes the accuracy and appropriateness of measures, while reliability refers to the consistency of the measurements (Ladino, Spaulding, & Vogel, 2010).
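The distinction just drawn, consistency versus accuracy, can be made concrete with a short simulation. The snippet below is a hypothetical illustration (all numbers invented): a biased instrument returns tightly clustered readings, so it is reliable, yet every reading is far from the true value, so its measurements are not valid.

```python
import statistics

# Hypothetical readings from an instrument that is consistent but biased:
# the true value is 100.0, yet every reading clusters around 115.
true_value = 100.0
readings = [114.8, 115.1, 115.0, 114.9, 115.2]

spread = statistics.stdev(readings)            # small spread -> reliable
bias = statistics.mean(readings) - true_value  # large bias -> not valid

print(f"spread = {spread:.2f}")  # tight clustering: consistent measurements
print(f"bias   = {bias:.2f}")    # systematic error: inaccurate measurements
```

A small spread with a large bias is exactly the case of an instrument that is reliable without being valid; validity requires both low spread and low bias.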
In quantitative research, validity can be defined in terms of a construct which determines the type of data to be collected and the way in which the information is to be gathered (Winner & Braun, 1998). Validity in qualitative research was defined by Slashing (2003) as quality, rigor, and trustworthiness (p. 02). The internal validity of a study can be affected by observations, selection of informants for maximum variability, selection of participants, and improper or misguided conclusions, whereas external validity can be influenced by types of selection procedures, kinds of settings in which experiments are conducted, historical consequences from the lives of participants, and the variations in the meanings of constructs across time, environments, and populations (Michael, n.d.). Reliability can be illustrated through consistent results after repeated evaluations show a continuous stability of measurements for a given period of time (Kirk & Miller, 1986). Reliability has been defined by Cope (2000) as "the extent to which results are consistent over time and accurately represent the total population under study... If the results of a study can be reproduced under a similar methodology, then the research instrument is also considered to be reliable" (p. 1). However, Slashing (2003) cautions that a research instrument which measures consistently may not be measuring accurately. Hence, such inaccuracies of measurement make the research instrument invalid and undermine the internal consistency and reliability of the research. Internal reliability can be affected by inference descriptors, a researcher's selections of data, and the interpretations of the data by the researcher (Bloom, n.d.).
External reliability can be influenced by situational contexts that affect the information retrieved from participants, data collection, analysis methodology, and constructs (Slashing, 2003).

Other Approaches to Research

Unlike research investigations, program evaluations are critiqued regarding their immediate impact on what was observed and studied (Ladino, Spaulding, & Vogel, 2010). A program can be defined as a group of specific activities with measurable objectives (p. 363). The purpose of evaluating a program is to make a decision on a course of action, whereas a research study provides information about a particular topic or practice. Program evaluations use formative and summative processes. These processes involve collecting information while the program occurs and measuring results at the end of the program to determine how those outcomes related to the overall program and its success (p. 366). Once these processes have been completed, the findings can be used to reform educational practices. There are evaluation models that can be applied through these formative and summative approaches. All models of evaluation contribute to the development of the evaluation plan, capacity, data collection, data analysis, and reporting procedures of the study. The most common model for program evaluation is the objective-based approach, which assesses the overall purpose of the program and defines the type of information to be collected for evaluation. This approach also utilizes benchmarks, or quantitative goals that participants are expected to attain, to ensure the success of the program. Among other program evaluation templates, the logic model measures progress at each phase of the curriculum while operating on the assumption that a rational sequence of events must occur in order to produce the final results of the program (p. 373). These sequences of events begin with resources or inputs, which create actions or activities that lead to changes in the participants (p.
374). These changes or outcomes verify the efficacy or inefficacy of the program. In other words, the logic approach is a picture of how the program works through the theories and assumptions underlying the program (W. K. Kellogg Foundation, 2004). The logic model is commonly used for program evaluations in health education because it can illustrate the infrastructure of a program model while integrating the activities of the clinical educators and patients (Centers for Disease Control and Prevention, 1999). "A detailed logic model can strengthen claims of causality and be a basis for estimating the program's effect on endpoints that are not directly measured but are linked in a causal chain supported by prior research... Logic models can be created to display a program at different levels of detail, from different perspectives, or for different audiences" (p. 9). It is imperative in health education to identify causal relationships among variables of patient care and clinical learning paradigms. This is why the logic approach is such a good choice for evaluating these types of programs.

Program evaluations possess benefits and shortcomings. One advantage of program evaluations is the immediate application of the information to a setting or environment for implementing improvements and other efficacious changes. Examples of disadvantages in program evaluations include the lack of available resources for improving program deficiencies identified through formative processes, and the subjectivity of an internal evaluator who may have preconceived ideas about what the program outcomes should be.

The focus of effective education is action (Spencer, n.d.). Action research in education has been described as research accomplished by teachers to provide insights for themselves (Mills, 2011).
It is also a way for teachers to work collaboratively with each other, with education administrators, and with stakeholders to improve classroom instruction and the learning potential of students (C.A.R., Madison Metropolitan School District, 2010). The primary purpose of action research is to change and improve educational environments and outcomes (Ladino, Spaulding, & Vogel, 2010). The stages in conducting action research are sequential and cyclical (Classroom Action Research, 2012). These steps are illustrated in Figure 3. The diagram in the figure implies important ideas regarding the structure of action research. This structure should include ways to clearly define an issue, to challenge the assumptions and views of the researcher conducting the study, to develop a concise plan for data collection, to encourage collaboration between the researcher and peers, and to provide evidence for practice improvement (Ladino, Spaulding, & Vogel, 2010).
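Returning to the logic model discussed under program evaluation: its assumed rational sequence of inputs, activities, outputs, and outcomes can be sketched as a simple ordered mapping. The stage contents below are invented placeholders, not drawn from any actual program:

```python
# A minimal sketch of a program logic model: each stage is assumed to feed
# the next in a rational sequence. Stage contents are invented placeholders.
logic_model = {
    "inputs":     ["funding", "clinical educators", "curriculum materials"],
    "activities": ["workshops", "patient-education sessions"],
    "outputs":    ["sessions delivered", "participants reached"],
    "outcomes":   ["changes in participant knowledge and behavior"],
}

def causal_chain(model):
    """Render the assumed causal sequence stage by stage."""
    return " -> ".join(model)

print(causal_chain(logic_model))  # inputs -> activities -> outputs -> outcomes
```

Because the stages are ordered, rendering the chain makes the model's causal assumptions explicit at a glance, which is the point of the logic approach.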