Monday, April 5, 2010
Design-based research
What about the research approaches of engineering? Prototyping.
Technology = Application of knowledge
Technology = Creation of knowledge
Technology = Transforming and intervening in the natural world to produce desired results
See handout
Selection skills
1. Data being measured (nominal "categorical", ordinal "ranked", interval "scaled")
2. Number and kind of groups involved
3. Groups are compared or related
SPSS practice
(3) Test for statistically significant difference between means (do the intervals overlap?):
SPSS, Analyze, Descriptive Statistics, Explore, Statistics, Confidence Interval
Dependent List = Scaled data variable
Factor List = Grouping variable
Graphing (SPSS, Graphs, Legacy Dialogs, Error Bar) Variable = Scaled data variable, X axis = Grouping variable
Test for statistically significant difference between means (does the interval contain zero?) for (4) between groups:
SPSS, Analyze, Compare Means, Independent Samples T-test (Does the interval cross zero? If it crosses zero, there is no significant difference.)
Test for statistically significant difference between means (does the interval contain zero?) for (5) repeated measures:
SPSS, Analyze, Compare Means, Paired Samples T-test (Does the interval cross zero? If it crosses zero, there is no significant difference.)
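A rough Python sketch of the same three checks on made-up scores (scipy standing in for the SPSS menus; the data and group names are hypothetical):

```python
import numpy as np
from scipy import stats

group1 = np.array([72, 75, 81, 68, 77, 83, 74, 79])   # hypothetical scaled scores
group2 = np.array([65, 70, 66, 72, 69, 74, 63, 71])

# (3) 95% confidence interval around each mean -- do the intervals overlap?
for name, g in [("group1", group1), ("group2", group2)]:
    se = stats.sem(g)                                   # standard error = s / sqrt(n)
    lo, hi = stats.t.interval(0.95, len(g) - 1, loc=g.mean(), scale=se)
    print(f"{name}: mean={g.mean():.2f}, 95% CI=({lo:.2f}, {hi:.2f})")

# (4) Between groups: independent-samples t-test (SPSS also reports the CI of the
# difference; if that interval crosses zero, the difference is not significant)
t_ind, p_ind = stats.ttest_ind(group1, group2)
print(f"independent t = {t_ind:.2f}, p = {p_ind:.4f}")

# (5) Repeated measures: paired-samples t-test (same participants measured twice)
t_rel, p_rel = stats.ttest_rel(group1, group2)
print(f"paired t = {t_rel:.2f}, p = {p_rel:.4f}")
```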
Friday, April 2, 2010
Confidence Intervals
The confidence interval is the sample mean plus or minus the z score of alpha/2 multiplied by the standard error (sigma / square root of n). If you do not know sigma, substitute t for z and s for sigma.
"95% confident that the interval includes the parameter." Avoid saying "the parameter falls between the interval."
Comparing Means With Confidence Intervals
Do the intervals overlap? If yes, there is no statistically significant difference. That's it. This replaces H0 testing for between groups and repeated measures.
Confidence intervals for proportions/percentages/Pearson r are not tested in this course.
Confidence Interval applications:
1. Estimate population mean based on sample mean when sigma is known (z)
2. Estimate population mean based on sample mean when sigma is not known (t)
3. Test for statistically significant difference between means (do the intervals overlap?)
4. and 5. Test for statistically significant difference between means (does interval contain zero?) for (4) between groups and (5) repeated measures
Wednesday, March 31, 2010
Chapter 14 Confidence Intervals
Confidence intervals look like 78% (±3%). The interval is 75% to 81%.
Monday, March 29, 2010
Non-parametric tests
Chi-square test of independence (two variables or groupings). The differences between groups are tested. You test the Ho and either reject or fail to reject Ho. No association between groups is called independence.
(SPSS, Analyze, Descriptive Statistics, Cross-tab, Statistics, Chi-square)
Chi-square test of goodness of fit (one variable or group). The differences between the group and your expectations (chance) are tested. You test the same way as the test of independence. Under H0, the expectation is equal proportions across all choices (chance).
(SPSS, Analyze, Nonparametric Tests, Chi-square)
Chi-square test of significance of a proportion (compare 2 frequencies only, like "yes" and "no"). A special case of the goodness-of-fit test.
Chi-square homework set in blackboard.
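A minimal Python sketch of the three chi-square tests above, using made-up frequency counts (scipy stands in for the SPSS menus):

```python
import numpy as np
from scipy import stats

# Test of independence: 2 x 3 table of observed counts (two groupings)
table = np.array([[30, 14, 6],
                  [20, 25, 15]])
chi2, p, df, expected = stats.chi2_contingency(table)
print(f"independence: chi2={chi2:.2f}, df={df}, p={p:.4f}")

# Goodness of fit: one grouping; H0 expects equal proportions across choices
observed = [18, 30, 12]                       # defaults to equal expected counts
chi2, p = stats.chisquare(observed)
print(f"goodness of fit: chi2={chi2:.2f}, p={p:.4f}")

# Significance of a proportion: only two frequencies ("yes" vs. "no")
chi2, p = stats.chisquare([42, 28])
print(f"proportion: chi2={chi2:.2f}, p={p:.4f}")
```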
Friday, March 26, 2010
Non-parametric tests (Chapter 13)
Parametric techniques are used when we have scaled data (interval or ratio), e.g., r, z, t, F.
Non-parametric techniques are used when we have non-scaled data (ranked or categorical), e.g., chi-square or the phi coefficient.
Chi-square test of independence (conditional probability):
This test of two variables (groupings) shows whether they are independent of each other.
Chi-square test of goodness-of-fit:
Compare proportions.
Monday, March 22, 2010
ANOVA and exam 2
Repeated measures output. If you get significance, then do a post hoc, which is a paired-samples t-test with a corrected p-value for the multiple t-tests. Bonferroni correction: 0.05 divided by the number of comparisons.
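A rough sketch of that post hoc idea: pairwise paired-samples t-tests judged against a Bonferroni-corrected alpha (the three conditions and scores are made up):

```python
from itertools import combinations
from scipy import stats

scores = {
    "time1": [10, 12, 9, 14, 11, 13],
    "time2": [13, 15, 12, 16, 14, 15],
    "time3": [15, 18, 14, 19, 16, 18],
}

pairs = list(combinations(scores, 2))
alpha_corrected = 0.05 / len(pairs)        # Bonferroni: 0.05 / number of comparisons
print(f"corrected alpha = {alpha_corrected:.4f}")

for a, b in pairs:
    t, p = stats.ttest_rel(scores[a], scores[b])
    sig = "significant" if p < alpha_corrected else "not significant"
    print(f"{a} vs {b}: t={t:.2f}, p={p:.4f} -> {sig}")
```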
Between Groups:
Important output for Between Groups ANOVA. It follows the BG WG formulas.
Study guide in blackboard for exam 2.
To study for this exam (7 kinds):
1-way ANOVA (2 kinds) - (1-way = 1 independent variable)
1. Between Groups (SPSS: Analyze, Compare Means, One-way ANOVA)
2. Repeated measures (SPSS: Analyze, General Linear Model, Repeated Measures)
T-test (3 kinds)
1. Single sample (SPSS: Analyze, Compare Means, One sample t-test)
2. Between Groups (SPSS: Analyze, Compare Means, Independent samples t-test)
3. Repeated measures (SPSS: Analyze, Compare Means, Paired-samples t-test)
Pearson r
Simple linear regression
Wednesday, March 17, 2010
ANOVA - Chapter 12
Things beyond this class:
- Factorial ANOVA designs
- Dunns procedure
Announced that even though the course schedule lists a due date for the 1st draft of the full report, the draft is now due Apr 5.
ANOVA accounts for the variation within groups (WG) and between groups (BG).
The optimal case is low variation within groups (WG) and large variation between groups (BG), because that suggests you have controlled extraneous variables and are measuring what you want to measure.
Total Variance (TV) = BG + WG
F statistic = BG/WG
A significant F statistic does not tell you which means are significantly different from which means. It only tells you that at least one mean is significantly different from another.
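A minimal sketch of a one-way between-groups ANOVA on made-up data (scipy's f_oneway instead of the SPSS menus):

```python
from scipy import stats

group_a = [4, 5, 6, 5, 7, 6]
group_b = [8, 9, 7, 10, 9, 8]
group_c = [5, 6, 7, 6, 8, 7]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < .05, follow up with post hoc comparisons to find which pairs of means differ;
# the F statistic alone does not say which mean differs from which.
```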
Monday, March 15, 2010
T-test and ANOVA
Three ways to compare means of two groups (sample vs. population, sample vs. sample, pre vs. post)
pg 308-320 is not on the test. Scan it, but don't do z-test practice problems.
pg 339 #3 question (make sure you understand how to answer this.)
Data enter one column for age and one column for status (married or bachelor).
ANOVA
Chapter 12
The t-test only accommodates comparing 2 means. ANOVA is used to compare more than 2 means. Multiple t-tests inflate the alpha level, so we use ANOVA, since an extreme sample would show up multiple times in the pairings (Sample 1 vs. Sample 2, Sample 1 vs. Sample 3, etc.). If sample 1 is an outlier, each pairing might give you a Type 1 error.
ANOVA is the workhorse of statistics; whatever research question you have, many researchers would reshape it into something that can be run as an ANOVA. That is a little backwards, but it has been like that in the past.
We will study between-groups and repeated-measure ANOVAs.
ANOVA (analysis of variance)
When to use ANOVA:
1. More than two means compared
2. The group means should vary widely from grand mean (mean of means)
3. The groups' raw data does not vary widely. (Close clustering of scores)
Total variance = variance (treatment) + error, or between-groups variance + within-groups variance
Between-groups variance is the good stuff. (treatment + error)
Within-groups variance is the bad stuff (variance due to error only since each member of the group is exposed to the same treatment).
F statistic = BG/WG or (treatment + error)/error
F = 1 (means treatment had no effect)
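A hand-rolled sketch of that decomposition on made-up scores: SS total = SS between + SS within, and F is the between-groups mean square over the within-groups mean square (checked against scipy):

```python
import numpy as np
from scipy import stats

groups = [np.array([4., 5, 6, 5]), np.array([8., 9, 7, 10]), np.array([5., 6, 7, 6])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()
print(f"SS_total {ss_total:.2f} = SS_between {ss_between:.2f} + SS_within {ss_within:.2f}")

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"F = {f_stat:.2f}")
print(stats.f_oneway(*groups).statistic)   # matches scipy's one-way ANOVA
```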
Friday, March 12, 2010
Chapter 11
T-test (used to compare groups) - replaces z-test
T-test of the means of groups. Use the t-test when the population standard deviation (sigma) is unknown.
Sample compared to population:
t = (sample mean - population mean) / standard error
df = degrees of freedom, typically n-1
Sample compared to sample:
t = (sample1 mean - sample2 mean) / standard error
df = n1 +n2 - 2
Pre-post comparison (repeated-measures format):
Page 335 for formula
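A rough sketch of the first two formulas on made-up scores, checked against scipy's built-in t-tests:

```python
import math
import statistics
from scipy import stats

# Sample compared to population: t = (sample mean - population mean) / (s / sqrt(n))
sample = [102, 98, 110, 105, 99, 107, 101, 104]
pop_mean = 100
n = len(sample)
t_hand = (statistics.mean(sample) - pop_mean) / (statistics.stdev(sample) / math.sqrt(n))
print(t_hand, stats.ttest_1samp(sample, pop_mean).statistic)   # should match

# Sample compared to sample: independent-samples t-test, df = n1 + n2 - 2
s1 = [12, 14, 11, 15, 13, 12]
s2 = [16, 18, 15, 17, 19, 16]
print(stats.ttest_ind(s1, s2))
```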
Chapter 10
Causality
What is the cause of phenomena?
3 Conditions of internal validity:
1. X comes before Y (antecedence)
2. X and Y are in the same space and time (contiguity)
3. Z is explained away (necessary connection)
Experiments
Prediction and control…independent variable (we manipulate), dependent variable (we measure).
Pre-test -> Treatment -> Post-test
Pre-test -> Control -> Post-test
*Randomly assigning participants to treatment and control groups. This is not random sampling a population. Both groups are equivalent.
External validity: generalization from lab to real world.
Internal validity goes up (control goes up) then external validity goes down (generalizability to real life)
Between-groups and repeated-measures are the basic experimental designs.
Time-related effects are the major criticism of repeated-measures designs (pp 279-285).
Wednesday, March 10, 2010
Experiments and quasi-experiments
R=Randomization
O=Observation
X=Treatment
A=Analyze data
Experiments:
ROXOA
RO OA
R XOA
R OA
Threats to validity: R
Selection (messed up the random assignment)
Mortality (participants drop out of groups unequally). If most of the females drop out of one group, it throws off the experiment. Called mortality because it references rat studies in which rats died at unequal rates.
Threats to validity: O
Testing: the testing sensitizes participants to treatment
Instrument: measurement devices are messed up
Threats to validity: X
Experimenter bias (treat groups unequally)
Experiment diffusion (something contaminates the control group)
Threats to validity: O - O (multiple observations)
History (something affects participants outside of the experiment, like a TV show, breaking news, etc.)
Maturation (something within the participants changes)
Threats to validity: A
Statistical regression (extreme scores are not repeatable, tend to move toward the mean in second observations)
Statistical conclusion (improper or faulty analysis)
Quasi-experiments
Do not include randomization, so groups are not equivalent.
1. Nonequivalent control group: OXOA compared with O_OA
2. Simple interrupted time-series: OOOOOXOOOOOA (no random and no group compare)
3. Combined (Time series with nonequivalent control group): OOOXOOOA compared with OOO_OOOA.
Monday, March 8, 2010
SPSS
SPSS definitions:
p-value = Significance (SPSS)
pearson r = Pearson Correlation (bivariate SPSS)
Results section
"A significant correlation was observed between stress3 and coursgrad, r=-0.83, p=0.000."
Discussion section
"I interpret the correlation...." Correlation is for prediction, not causal. Avoid the impression that readers may have that you view it as causal.
r squared is the percentage of variance in y that is systematically varying with x. R-squared is the name of the game. The higher the better with the least number of independent variables.
y-intercept is the number in the Constant row under the Unstandardized Coefficients B column (y = slope * x + y-intercept, i.e., y = bx + a).
slope is the number in the row below the Constant row, in the same column (b**)
**If you standardize b, you get r.
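A minimal sketch of pulling the same numbers outside SPSS (the variable names and data are hypothetical):

```python
import numpy as np
from scipy import stats

stress = np.array([1, 2, 3, 4, 5, 6, 7, 8])
grade = np.array([95, 92, 88, 85, 80, 76, 70, 65])

result = stats.linregress(stress, grade)
print(f"slope (b)       = {result.slope:.3f}")      # unstandardized coefficient
print(f"y-intercept (a) = {result.intercept:.3f}")  # the Constant row in SPSS
print(f"r               = {result.rvalue:.3f}")
print(f"r squared       = {result.rvalue**2:.3f}")

# Standardizing b recovers r: r = b * (s_x / s_y)
print(result.slope * stress.std(ddof=1) / grade.std(ddof=1))
```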
The Ed Psych Data Directions in Blackboard is what we need to know.
Monday, March 1, 2010
Regression
Regression lines should not go beyond the original data points.
Error in regression is the difference between the predicted Y and the actual Y
(Y - Y-hat), observed minus expected. It is summarized by the standard error of the estimate, the standard deviation of the Y scores around the predicted Y values along (running) the regression line. The proportion of variance not accounted for is the coefficient of non-determination (1 - r squared).
Opposite of the error is the coefficient of determination (variability accounted for).
How much variance is error and how much is accounted for by a correlation with another variable?
Total variance = variance accounted for + error.
Variance accounted for is r squared (pearson r squared)
The connection between correlation and regression is that r is a standardized slope of the regression line: r = b (s sub x / s sub y)
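A small numeric sketch (made-up data) of the decomposition and the r = b (s sub x / s sub y) connection:

```python
import numpy as np
from scipy import stats

x = np.array([2., 4, 5, 7, 8, 10, 11, 13])
y = np.array([5., 9, 9, 13, 14, 17, 18, 22])

res = stats.linregress(x, y)
y_hat = res.intercept + res.slope * x

var_total = y.var()                               # total variance of Y
var_accounted = res.rvalue ** 2 * var_total       # variance accounted for (r squared share)
var_error = ((y - y_hat) ** 2).mean()             # variance of Y around the regression line
print(var_total, var_accounted + var_error)       # the two should match

# r = b * (s_x / s_y)
print(res.rvalue, res.slope * x.std() / y.std())
```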
Friday, February 26, 2010
Correlation and Regression
Table B3 determines whether or not the observed Pearson r is a "rare event" unlikely to have occurred by chance.
A large sample size usually makes r values significant.
The formula and calculation for comparing r's will not be required on test.
Regression
Uses the classic equation for a line: y = mx + b, but the letters are different in stats. It is y=bx+a where b is the slope and a is the y intercept.
Slope = rise over run or y1-y2 divided by x1-x2
Prediction comes from graphing the line and predicting x, y coordinates on the line.
Data that can be described as a line is known as perfectly linear relationship.
Best-fitting line is known as the regression line.
The method of least squares creates the regression line (or best-fitting line): it is the line that minimizes the overall (squared) distance between the regression line and the data points.
Calculation is not required for test.
Wednesday, February 24, 2010
Correlation
Scale data - Pearson Product-Moment Correlation (aka Pearson Correlation)
Ordinal data - Spearman Rank-Order Correlation
Nominal data - Phi Coefficient
Pearson Correlation: Range is from 1 to -1. The closer to 1 or -1, the stronger the relationship. At 0, there is no linear relationship whatsoever. A scatter graph that looks like a line indicates a strong relationship.
Correlation coefficient = r, r is a standard index from 1 to -1.
Important caveats about Pearson r:
1. Not all important or interesting relationships are linear. (Yerkes-Dodson Law)
2. Watch out for spurious correlations (counterfeit correlation)
A. Restricted range (see handout) - full range shows relationship where restricted range shows counterfeit correlation.
B. Combined groups: combining groups may off-set or wipe out a correlation that exists when the groups are not combined. Breaking out groups by demographics or gender or something helps avoid this problem.
C. Outliers: outliers throw off calculations. Why is there an outlier? You have to explain the outliers.
Correlation does not equal causation, it equals a degree of covarying.
Correlation does not tell us which of these is the case:
x -> y
y -> x
z -> x and y
coincidence
The Pearson r formula is the covariance divided by the total variability (the product of the standard deviations of x and y).
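A small sketch of that formula on made-up numbers, checked against numpy's built-in correlation:

```python
import numpy as np

x = np.array([1., 2, 3, 4, 5, 6])
y = np.array([2., 1, 4, 3, 6, 5])

r_hand = np.cov(x, y, ddof=1)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(r_hand)
print(np.corrcoef(x, y)[0, 1])      # matches numpy's built-in correlation
```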
Monday, February 22, 2010
SPSS introduction
Transform, Recode into different variable - Change data values; e.g., gender coded 1 and 2 could be changed to 3 and 4, or grades recoded so everything above a C is 1 and everything below is 2.
Transform, Compute variable - take several variables and calculate a new variable.
Analyze is where SPSS is powerful.
- Descriptive statistics
- ANOVA
- T-test
- General linear model
- Correlate
- Regression
- Nonparametric tests
- Scale
Friday, February 19, 2010
Hypothesis testing - probability
Directional: words like below or above or more or less are used.
Non-directional: words like difference or change or impact are used.
P-value (of z-observed, aka the observed score): Alpha is set by you, e.g. 0.05. The p-value is the probability of the observed score from your sample given that H0 (the null hypothesis) is true.
"If your p-value is less than alpha, you have grounds to reject H0 (the null hypothesis)."
Decision Errors
Type 1 error is a false positive (erroneously rejecting H0) - the likelihood is alpha.
Type 2 error is a false negative (erroneously failing to reject H0) - the likelihood is called beta. (Beta is not taught in this class.)
As alpha decreases, beta increases, and vice versa.
Power: the probability that the test will lead to rejecting H0 when H0 is actually false; you reject H0 when you should reject H0.
Telescope example: a type 2 error is a telescope that doesn't have enough power to see an asteroid that exists. If it has enough power, then you correctly reject H0.
Tuesday, February 16, 2010
Hypothesis testing - Standard error of the means
Raw data -> Summarized, Organized, Simplified (Descriptive statistics: s, x-bar, s-squared) -> Sample to population inferences (Inferential statistics: p, z, t, F, q)
Hypothesis testing
1. Simple random sampling: used for statistical inference, where populations are inaccessible, and are often more accurate. All units in the population have an equal chance of being selected.
2. Proportional Stratified Random Sample: sample maps exactly onto the population in terms of proportions of sub-groups (e.g. population has 10% seniors and sample has 10% seniors)
3. "Errors" in sampling (sampling error and non-sampling error) must be dealt with. Samples and population don't match-up. Non-sampling errors include question text and framing that creates confusion. Other things like cultural issues can cause non-sampling error.
Sampling Distributions
How do you detect how much error (sampling error) is in the sample? Use a standard-deviation-like calculation (spread of scores with respect to the mean).
Select multiple samples and calculate the means of those samples; then, using those means in place of raw scores, calculate a standard-deviation-like statistic of the means called the standard error of the means (s-sub-x-bar).
Standard error of the means = sample standard deviation divided by the square root of the number of observations in the sample. (s / sqrt n)
or theoretical (sigma-sub-x-bar = sigma / sqrt N)
A sampling (sample of means) distribution is normally distributed when it is drawn from a normally distributed population or the size of the samples is reasonably large (at least 30).
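A simulation sketch of the idea (made-up population values): draw many samples, take their means, and the standard deviation of those means comes out close to sigma / sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 15.0, 30
sample_means = [rng.normal(100, sigma, n).mean() for _ in range(5000)]

print(np.std(sample_means))      # empirical standard error of the means
print(sigma / np.sqrt(n))        # theoretical sigma / sqrt(N)
```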
Friday, February 12, 2010
Paper writing tips
- Begin with the end in mind (goals of the writing)
- Flow with the end in mind (each sentence and paragraph has "end" purpose)
- Look for gaps or open space between sentences and paragraphs where the connections are weak (reader's willingness to move on)
- Claims must be supported (claims are supported by logic both yours and others)
- Abstracts are the lean and mean of here's what we did and here's what we got. No lit review stuff in abstracts.
- Don't forget about the bridge from the literature and your hypotheses.
Writing abstracts:
- Opening (one sentence)
- Purpose of the study (include hypotheses)
- Research design/method description
- Results (brief description)
- Conclusion (one sentence)
* Don't put any sentence in the abstract that could be cut and pasted into another abstract.
Probability
Plotting z scores is important for conceptually understanding z-score relationships.
Wednesday, February 10, 2010
Relative standing
Percentiles (ordinal position or rank): not sensitive to the variability of the raw scores, just the ranking.
Standard scores (z-scores): Statistical approach to standardizing scores in a standard scale of measurement. The relative standing of one score can be compared to another, even when they are measured on different scales (GMAT, CPA, GRE, etc.). A numerical index of relative standing expressed in standard deviation units. The mean has a z-score of 0 standard deviation units.
Calculated by (score - population mean)/population standard deviation also known as the distance from the mean expressed in standard deviation units.
Formula structure is "observed" minus "expected" divided by "error".
Was Wayne Gretzky a better scorer than Michael Jordan? (Compare z-scores)
If you don't know the population standard deviation, generally you don't calculate z-scores.
Z-distribution (Appendix B, pg B1-B5)
Table calculates the (a) area between the Z score and Mean and (b) area beyond the Z score in a normal distribution.
The "area beyond the Z score" is the probability score.
Monday, February 8, 2010
Distributions and variability (standard deviation)
Distributions:
Normal distribution is where the mean, median, and mode are identical; is symmetrical; and tails never touch x-axis. Characteristics in nature are thought to be normally distributed. Parametric statistics are appropriate.
Non-normal distributions are not symmetrical; they are either positively skewed (tailed) or negatively skewed, and are peaked (leptokurtic, less variability) or flat (platykurtic, more variability). Non-parametric statistics are appropriate.
Population vs. Sample (subset of population)
Sample statistic vs. population parameter (like "x bar" and "mu" for the mean). If you know the parameters, you do not need statistics.
Variability:
Variability is the spread or dispersion of scores in a distribution.
1. Range is highest score value minus lowest score value. Range is not sensitive to inside variability. Two sets of data can have the same range, but very different standard deviations.
2. Variance is an index that considers all scores (including inside variability). The sample version is read as "s squared" and the population version as "sigma squared". Variance is the average squared distance of all the scores from the mean of the scores: sum of (x - mean) squared / (n - 1) = s squared. Variance squares the distances from the mean because without squaring, the sum would be zero.
3. Standard deviation is the square root of variance. Standard deviation is an index of variability that is expressed in the original counting units (variance is expressed in squared counting units). It is known as the "spread-out-ness". It is read as "s" for samples and "sigma" for population.
(On exam: A calculation of standard deviation will be required by hand)
4. Median Absolute Deviation is used for ordinal (ranked) data and skewed data (see pg. 103).
*If you have scaled (interval) data, then use mean and standard deviation. If you have ranked (ordinal) data, then use median and median absolute deviation. If you have nominal data, then use mode and a frequency comparison. (See table 5.5 page 107).
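A quick sketch computing the variability indexes above on made-up scores:

```python
import numpy as np

scores = np.array([4., 8, 6, 5, 3, 7, 9, 6])

value_range = scores.max() - scores.min()
variance = ((scores - scores.mean()) ** 2).sum() / (len(scores) - 1)   # s squared
std_dev = np.sqrt(variance)                                            # s
mad = np.median(np.abs(scores - np.median(scores)))                    # median absolute deviation
print(value_range, variance, std_dev, mad)
```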
Friday, February 5, 2010
Centrality and spread
Practice the summation notation.
Order of operations:
- Parentheses
- Exponents/Square roots
- Multiply/Divide
- Add/Subtract
All the statistics for our purposes are based on two concepts: centrality and spread.
Centrality: (aka Central tendency)
- Mode (used for categorical data, i.e., nominal or frequency data) - the most frequently occurring value. (bi-modal, multi-modal, amodal - no mode, constant scores)
- Median (used for ordinal or ranked data) - known as the 50th percentile: half the scores above and half below. For an even number of cases, take the mean of the two middle scores.
- Mean (used for scale data or equal interval data) - sum of scores / number of cases
Properties of the mean:
- Sum of the deviations about the mean: sum of (x - mean) = 0
- The sum of (x - mean) squared is a minimum. The actual mean brings this expression to its lowest value; if you substitute any other number for the mean, the expression gives a greater answer.
- The grand mean is not a simple mean of means. It is calculated by weighting each sample mean by its sample size: grand mean = (n1*mean1 + n2*mean2)/(n1+n2)
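A small numeric sketch of that weighting (made-up sample sizes and means):

```python
# Weighted grand mean vs. the unweighted mean of means when sample sizes differ
n1, mean1 = 40, 72.0
n2, mean2 = 10, 90.0

grand_mean = (n1 * mean1 + n2 * mean2) / (n1 + n2)
print(grand_mean)                 # 75.6, pulled toward the larger sample
print((mean1 + mean2) / 2)        # 81.0, the unweighted mean of means
```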
Friday, January 29, 2010
Variables and measurement continued
Rychlak's Logical Learning Theory - you can use the empirical method to test formal and final causes. He opposed the deterministic theory that humans are machine-like. He used data against the data gatherers. His problem was that he was swimming upstream against tradition.
Measurement
What is it? Assigning values to properties or characteristics based on a set of rules.
Thorndike: If it exists, it can be measured. (Viewpoints tend to reveal and conceal things.)
Operationism: Through the use of operational definitions, anything can be measured, including things like intelligence or competence. Operational definitions are precise procedures used to create independent and dependent variables. Operationalizing is finding a measurable variable that is related to the variable that can't be measured directly.
Textbook pg 63 "Odd twist of logic...": Meaning of a statement IS its observable or measurable qualities. Love is the number of hugs. Intelligence is the IQ test results.
Research proposals
- He is not opposed to white papers or internal reports, but the purpose of this class is an introduction to scholarly writing. White papers are typically not journal articles. The project for the class is scholarly, so white papers may have to be written twice (once for your employer and once for class).
- Take part in a scholarly conversation.
- Take the assignment seriously, but know you won't be an expert or even close after taking this class.
- Relevance needs to be addressed in the final paper. Why should the reader care?
- Research needs to be original.
- Watch out for horse-race research (pitting two methods against each other). One curriculum in one class and another in another class, which one wins. These are flawed.
- Use numbers to make sense of the phenomenon, not to determine causal relationships.
- Make sure you can get data in a short time; for this class you need to use statistics, interpret them, and make inferences.
Wednesday, January 27, 2010
Library Instruction
ProQuest is a database that stores all of the published dissertations. Find a similar topic to see what their literature review says.
RefWorks is free through the library. This will easily store references and create bibliographies using a write and cite download for Word.
ERIC tools to use:
- Thesaurus (helps you find the right search terms...don't waste time on bad search terms)
- Search history (keep your search histories - search box text)
Indicators of scholarliness:
- Peer reviewed.
- Who publishes it?
- Covers the topic you are searching for.
How far back should you go in time?
- Include enough history to nod to the foundations
- Stay modern to show currentness
Identifying gaps in the research:
- Look for places where it says "research says" or "studies show", look at the parameters, and then look at a different body of research or tweak the parameters.
- Look for un-answered questions - "this study failed to show...future studies should..."
- Reading along and you have a question on your own that didn't get answered. Article leaves you with questions.
- Any possibilities rejected by the author that maybe shouldn't be rejected yet.
Monday, January 25, 2010
Chapter 3 and 4 - Variables and Measurement
Variable - changeable characteristic
- Qualitative: different in category or kind or grouping
- Quantitative: different in amount (how fast, weight, time elapse, measured in numbers, etc.) (a) Continuous (divisible) and (b) Discrete (indivisible)
Constant - non-changeable characteristic
Causation - "Billard ball" effect. Variables cause other variables.
- Antecedence (Cause A preceeds Effect B)
- Systematic covariation or Contiguity (Cause is together with the effect in time and space)
- Eliminate other possible causes (No other way that this cause could have happened)
Do all three experimentally, the best you can, and you have met the empirical science standards and you can explain a phenomenon that will hold up against critics.
IV (cause) -> DV (Effect) when:
- IV precedes DV
- IV and DV are together in time and space
- Other possible causes are eliminated
Science is philosophical:
- Science rests on the concept of causality, but causation is a philosophical conception (non-observable). A cause is not observable.
- Aristotle's 4 causes: (a) material or substance, (b) efficient or sequence of events across time, (c) formal or essence/pattern of something, (d) final or goal/purpose
Friday, January 22, 2010
Publishing
- Submit manuscript
- Editor acknowledges receipt
- Reviewers accept or reject (or accept and ask for revisions)
- Author signs forms and returns to editor
- Production
- Produced manuscript sent to author for review
- Author reviews and sends back comments
- Typeset manuscript sent to author for review
- Author reviews and sends back comments
- Publication
Critical thinking and literature reviews
Critical thinking:
- Get in touch with your theoretical background (you may not be able to examine all of your values and assumptions). Do your best. Some of your assumptions may not have an examinable "why".
- Support your arguments. An assertion is different from an argument. An assertion is an opinion that is not supported by evidence and a flow of logic. Burden of proof is on the author.
- Clear continuity paragraph to paragraph.
- Methodological details need to be examined.
- Conclusions need to be examined. (Warranted, forced, viable, etc.)
Literature Review
The purpose of the literature review is to summarize and analyze the literature using a narrative. It also helps establish relevance to the study being written. The theme should be progressive or advancing knowledge.
- Identify the major topics and phenomena of interest.
- Identify the positions, points of agreement and disagreement.
- Identify what research suggests and the gaps.
- Critical analysis and telling a story with a voice of your own. Draw a conclusion. Describe trends as you see them.
"Missing link" paragraph is the bridge from the literature review to the research hypotheses.
Find a literature review article on your topic to start.
Other things to consider:
- Importance of history.
- Capitalize on the tension of disagreements currently in the literature.
- Literature review needs to be a scholarly guided argument (clears some space for your study).
Appendix D - APA writing
Clarity in professional writing typically means proper words and simple sentence structure.
Brevity in professional writing typically means saying exactly what one needs to say and nothing superfluous. Cut the fat off the steak.
Common problems:
- Sexist words like man instead of people or he instead of they
- Data is plural. The data are...not the data is.
- Amount vs. number...The number of participants...not the amount of participants
- i.e. vs. e.g. i.e. = "that is", e.g. = "for example"
- Giving inanimate objects human characteristics, like "the experiment concluded that..."
- Use words to express numbers below 10 except time, dates, and ages.
- If a sentence begins with a number, it is written as a word.
Manuscript sections, in order:
- Title page
- Abstract (typically no more than 120 words)
- Introduction
- Method
- Results
- Discussion
- References
- Appendices (Rarely used)
- Author notes (Contains acknowledgment of financial support)
- Footnotes (Rarely used)
- Tables
- Figure Captions
- Figures
Wednesday, January 20, 2010
Literature review
Hypotheses flow from the literature review. Give a guided tour through the literature leading toward patterns and themes. Then bridge the literature review to your study questions using about a paragraph (the "missing link" paragraph). Flow is huge.
Which comes first: Literature review or research question? Both rely on the other.
Look at the review articles to see good literature reviews of an area. Review articles don't have original data, they just summarize the literature.
Friday, January 15, 2010
Ethics and other things
Deception: (1) You must reveal the deception at the end of the study. (2) You must make the case that the data cannot be obtained in a non-deceptive way. The IRB looks at these two points carefully.
Research Project:
Clarifying the meaning of a phenomenon: Good place to start your search for a topic and questions.
Research questions:
Think widely and creatively about the problems around you. Bring an informed mind to the task. Practice is what makes good research questions. What is interesting (to you) is the best place to look for questions. Intellectual risk taking (not studying what everyone else is studying) can be rewarding.
Daniel N. Robinson, "Paradigms and the myth of framework" - Progress in science is informed imagination applied to a problem of genuine consequence, not the habitual application of a formulaic mode of inquiry to a set of quasi-problems. Invent what you need to invent to solve the research problems. http://tap.sagepub.com/cgi/content/abstract/10/1/39
Kurt Danziger, "Constructing the subject" and "Naming the mind": two books that critique research.
Wednesday, January 13, 2010
Ethics discussion notes
- Inconvenience
- Physical
- Psychological
- Social
- Economic
- Legal
Research should be thought of in terms of these risks and their likelihood, severity, duration, reversibility, and detectability.
Three major categories of research submitted to the IRB:
- Exempt: Normal practices, educational tests, observe public in public places (bathrooms not included), collection of existing data, taste tests
- Expedited: Minimal risk of participant identity becoming public, medical devices, blood samples, collection of images, voice
- Full board: A probable risk of harm, involves deception
IRB is not necessary for pilot studies (un-publishable) and internal learning (learning about your teaching by asking questions of your students).
Tuesday, January 12, 2010
Chapter 2
The responsibility for ethical research ultimately rests with the researcher.
Institutional Review Board (IRB) is required by federal regulations. Any research involving human subjects requires the research proposal to be approved by the IRB.
The rights of the subject (right to not be harmed or changed in a negative way) are ethically balanced against the rights of the researcher (right to ask questions and seek knowledge) in a well formed research proposal.
APA's six general ethical principles
- Competence: researcher must be competent in techniques and take precautions to protect subjects.
- Integrity: Fair and honest...no misleading or deceptive statements
- Professional and scientific responsibility: Conduct should not reduce public trust or colleagues' reputations
- Respect for peoples' rights and dignity: Privacy, autonomy, etc.
- Concern for the welfare of others: Minimize harm to participants
- Social responsibility: concern for society
Participants should be made aware of all possible risks. (Informed consent)
Participation in research should be voluntary, and participants should be free to withdraw at any time without penalty.
No informed consent is necessary if the participants remain anonymous, the observed behaviors would occur naturally even if the research were not being conducted, and the behavior is not embarrassing (like observing which playground equipment is most popular).
Monday, January 11, 2010
Objectivism vs. Relativism
Cartesian anxiety--thinking of Descartes (longing for certainty, but doubting its existence)
For Descartes, skepticism is a tool, not an end; you don't reside in skepticism. Knowledge is held to a high (perhaps unachievable) level of certainty. This causes a slide to relativism (no truth at that standard). Intellectual chaos is no place to live either.
David Paulsen: The God of Abraham, Isaac, and William James - article about the nature of God.
John Dewey: Pragmatism and functional psychology are an alternative to objectivism and relativism.
Pragmatism: If something works satisfactorily, then it is true. The problem with pragmatism is that things that work may not be appropriate (torture in school).
Hermeneutics: truth unfolds through experience. It is always changing.
Thursday, January 7, 2010
Yanchar paper notes
The uncertainty principle made it impossible to know truth through observation because the experiments actually helped invent reality.
Philosophy of science: Verification (verify a theory in terms of proof positive) and falsification (prove the theory is not true) were both fallacious. Verification is impossible because there are always rival theories that could explain the data. Falsification requires the revision of theories proven false and the re-testing. Verisimilitude (process of revising and re-testing theories to improve them) can never put theories beyond criticism and can never fully-prove them. Science can therefore never reach truth.
Why study science?
1. To be able to debate in the language of science
2. Scientific reports, despite their shortcomings, are still in demand
3. The pursuit of science (understand phenomena, discovering relationships, seeking better treatments, etc.) is a valuable pursuit
4. We can examine the strengths and weaknesses of the approach
Chapter 1 Notes
1. Intuition: gut feeling, not very reliable, science places little credence
2. Authority approach: Reliance on someone perceived to be knowledgeable, perceived expertise, questionable value
3. Rational-inductive approach: reasoned answers, logical, limited value due to faulty logic, reasoning, and memory skills
4. Scientific approach: systematic observation and recording of events (Also referred to as Empirical approach), laboratories
The book focuses on three research methods under the empirical approach:
1. Experimental method: tries to determine causes using carefully controlled conditions
2. Correlational method: looking for reasonably accurate predictions, but not cause and effect
3. Quasi-experimental: looks like experimental, but can't determine causality
Reliability: consistent measurements of the same thing repeated
Validity: measurement actually measures what it claims to measure
Objective measures: based on direct use of sensory information
Subjective measures: based on reactions of observer
Bias for positive instances: paying more attention to events that support our preconceived expectations and often ignoring negative instances
Rival explanations: alternative hypotheses that could give rise to the same data
Operational definitions: precise definition used in the procedures
Replication: observations can be repeated with the same results
Internal validity: study allows us to answer the research question
External validity: findings can be generalized broadly
Wednesday, January 6, 2010
Philosophy of science
Problem sets are not unlike the exam questions.
Descriptive statistics is taking lots of data and organizing and summarizing data sets.
Inferential statistics is making conclusions or inference about the data.
Page 4. Doing research is discovering facts...How can we know a fact is a fact?
- Fact = objective knowledge that comes from systematic analysis (bias and all other contaminants have been removed or bracketed). This is a 19th-century concept, but it has been shown to be outdated.
- Nothing can be proved and nothing can be disproved.
- Methods are placed in the spotlight to be set as the warrant for the conclusions, but methods may have assumptions that are not correct.
Method -> Truth (traditional)
Background -> Questions/Hypothesis -> Plan/Methods -> Products (Alternative)
Empirical method is knowing truth through sensory observation. Assumption is that truth exists in sensory observation aka physical universe. Illusions are one way this method fails.
Monday, January 4, 2010
First day stuff
http://docutek.lib.byu.edu/eres/default.aspx
Go to the Library website. (yan550)
Read the first chapter of Moby Dick for writing style.
http://www.americanliterature.com/Melville/MobyDickorTheWhale/2.html
Arthur Henry King: English rhetoric expert who joined the church. "A Man Who Speaks To Our Time From Eternity"
http://www.lds.org/ldsorg/v/index.jsp?hideNav=1&locale=0&sourceId=82bd27cd3f37b010VgnVCM1000004d82620a____&vgnextoid=2354fccf2b7db010VgnVCM1000004d82620aRCRD
Research project needs to push forward the frontiers of knowledge and not appear to be an executive summary for pay work.