Wednesday 7 June 2017

Reading Notes: Statistics 101 (Part 4)

Data Analysis

Two ways:
- Looking at the data graphically to see what the general trends in the data are, and
- Fitting statistical models to the data.

Frequency distribution: a graph plotting values of observations on the horizontal axis, and the frequency with which each value occurs in the data set on the vertical axis (a.k.a. histogram).
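A frequency distribution can be tallied in a few lines of Python; this is a minimal text-mode sketch (the exam scores are made up for illustration):

```python
from collections import Counter

# Hypothetical exam scores (made-up data)
scores = [3, 4, 4, 5, 5, 5, 6, 6, 7]

# Frequency of each observed value
freq = Counter(scores)

# Crude text histogram: observed value on the "horizontal axis",
# frequency drawn as a bar of asterisks
for value in sorted(freq):
    print(f"{value}: {'*' * freq[value]}")
```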

Tuesday 6 June 2017

Reading Notes: Statistics 101 (Part 3)

Data collection methods in experiment research
1. to manipulate the independent variable using different entities.
== a between-groups, between-subjects, or independent design.
2. to manipulate the independent variable using the same entities.
== this means giving a group of students positive reinforcement for a few weeks and testing their statistical ability, then giving the same group punishment for a few weeks before testing them again, and finally giving them no motivator and testing them a third time.
== a within-subject or repeated-measures design.

Data collection method determines the type of test that is used to analyse the data.

Andy Field:
The reason why some people think that certain statistical tests allow causal inferences is that historically certain tests (e.g., ANOVA, t-tests, etc.) have been used to analyse experimental research, whereas others (e.g., regression, correlation) have been used to analyse correlational research (Cronbach, 1957)...these statistical procedures are, in fact, mathematically identical.

Two sources of variation:
Systematic variation: This variation is due to the experimenter doing something in one condition but not in the other condition.
Unsystematic variation: This variation results from random factors that exist between the experimental conditions (such as natural differences in ability, the time of day, etc.).

In a repeated-measures design, differences between two conditions can be caused by only two things:
(1) the manipulation that was carried out on the participants, or
(2) any other factor that might affect the way in which an entity performs from one time to the next.
== The latter factor is likely to be fairly minor compared to the influence of the experimental manipulation.

In an independent design, differences between the two conditions can also be caused by one of two things:
(1) the manipulation that was carried out on the participants, or
(2) differences between the characteristics of the entities allocated to each of the groups.
== The latter factor in this instance is likely to create considerable random variation both within each condition and between them.

When we look at the effect of our experimental manipulation, it is always against a background of ‘noise’ caused by random, uncontrollable differences between our conditions.

In a repeated-measures design this ‘noise’ is kept to a minimum and so the effect of the experiment is more likely to show up.

This means that, other things being equal, repeated-measures designs have more power to detect effects than independent designs.
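The noise argument above can be illustrated with a small simulation, using only the standard library (all parameters are made-up assumptions): each participant has a stable individual baseline; in a repeated-measures design that baseline cancels out of the difference scores, whereas in an independent design it stays in as unsystematic between-group noise.

```python
import random
import statistics

random.seed(1)  # fixed seed only so the example is reproducible

N = 200            # participants per condition (assumed)
EFFECT = 1.0       # true effect of the manipulation (assumed)
BASELINE_SD = 3.0  # individual differences in ability
NOISE_SD = 1.0     # moment-to-moment unsystematic variation

def score(baseline, treated):
    return baseline + (EFFECT if treated else 0.0) + random.gauss(0, NOISE_SD)

# Repeated measures: the SAME people do both conditions,
# so each person's baseline cancels in their difference score.
baselines = [random.gauss(0, BASELINE_SD) for _ in range(N)]
paired_diffs = [score(b, True) - score(b, False) for b in baselines]

# Independent design: DIFFERENT people in each condition,
# so baseline differences add noise to the comparison.
group_a = [score(random.gauss(0, BASELINE_SD), True) for _ in range(N)]
group_b = [score(random.gauss(0, BASELINE_SD), False) for _ in range(N)]

print(statistics.mean(paired_diffs))   # both designs estimate the same effect...
print(statistics.stdev(paired_diffs))  # ...but the paired noise is much smaller
print(statistics.stdev([a - b for a, b in zip(group_a, group_b)]))
```

Both designs recover the same effect on average, but the spread of the paired differences reflects only the unsystematic noise, which is why the effect is easier to detect.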

The two most important sources of systematic variation in a repeated-measures design are:
Practice effects: Participants may perform differently in the second condition because of familiarity with the experimental situation and/or the measures being used.

Boredom effects: Participants may perform differently in the second condition because they are tired or bored from having completed the first condition.

Randomization: the process of doing things in an unsystematic or random way. In the context of experimental research the word usually applies to the random assignment of participants to different treatment conditions.
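Random assignment to treatment conditions can be sketched in a few lines of Python (the participant IDs are hypothetical; `random.shuffle` supplies the unsystematic ordering):

```python
import random

random.seed(42)  # fixed seed only so the example is reproducible

participants = [f"student_{i}" for i in range(12)]  # hypothetical IDs
conditions = ["positive reinforcement", "punishment", "no motivator"]

random.shuffle(participants)  # unsystematic ordering of participants

# Deal the shuffled participants into the conditions round-robin,
# giving each condition an equal-sized random group
groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}

for condition, members in groups.items():
    print(condition, members)
```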

Reading Notes: Statistics 101 (Part 2)

Source: Andy Field
Correlational or cross-sectional research: observe what naturally goes on in the world without directly interfering with it.
- by either taking a snapshot of many variables at a single point in time, or
- by measuring variables repeatedly at different time points (known as longitudinal research).
== provides a very natural view of the question we’re researching because we are not influencing what happens and the measures of the variables should not be biased by the researcher being there (an important aspect of ecological validity).
== tells us nothing about the causal influence of variables.
- Variables are often measured simultaneously.
- The first problem with doing this is that it provides no information about the contiguity between different variables.
- The second problem with correlational evidence: the tertium quid (‘a third person or thing of indeterminate character’).
== E.g., a correlation has been found between having breast implants and suicide (Koot, Peeters, Granath, Grobbee, & Nyren, 2003).
== However, it is unlikely that having breast implants causes you to commit suicide – presumably, there is an external factor (or factors) that causes both; for example, low self-esteem might lead you to have breast implants and also attempt suicide.
== These extraneous factors are sometimes called confounding variables or confounds for short.

Experimental research: manipulate one variable to see its effect on another.
- Even when the cause–effect relationship is not explicitly stated, most research questions can be broken down into a proposed cause and a proposed outcome.
- Both the cause and the outcome are variables.
- The key to answering the research question is to uncover how the proposed cause and the proposed outcome relate to each other.

David Hume said that to infer cause and effect:
(1) cause and effect must occur close together in time (contiguity);
(2) the cause must occur before an effect does; and
(3) the effect should never occur without the presence of the cause.

- These conditions imply that causality can be inferred through corroborating evidence: cause is equated to high degrees of correlation between contiguous events.

- The shortcomings of Hume’s criteria led John Stuart Mill (1865) to add a further criterion: that all other explanations of the cause–effect relationship be ruled out.
== Mill proposed that, to rule out confounding variables, an effect should be present when the cause is present and that when the cause is absent the effect should be absent also.
== Mill’s ideas can be summed up by saying that the only way to infer causality is through comparison of two controlled situations: one in which the cause is present and one in which the cause is absent.

- This is what experimental methods strive to do: to provide a comparison of situations (usually called treatments or conditions) in which the proposed cause is present or absent.
- Example: the effect of motivators on learning about statistics. Randomly split some students into three different groups in which teaching styles vary in the seminars:
== Group 1 (positive reinforcement): praise participants
== Group 2 (punishment): give verbal punishment
== Group 3 (no motivator): give neither praise nor punishment, i.e. give no feedback at all.

Manipulated variable or independent variable: the motivator (positive reinforcement, punishment or no motivator).
Interested outcome or dependent variable: statistical ability, to be measured via a statistics exam after the last seminar.
Assumption: the scores will depend upon the type of teaching method used (the independent variable).
Inclusion of the ‘no motivator’ group: proposed cause (motivator) is absent, and we can compare the outcome in this group against the two situations in which the proposed cause is present.

If the statistics scores are different in each of the motivation groups (cause is present) compared to the group for which no motivator was given (cause is absent) then this difference can be attributed to the type of motivator used.
In other words, the motivator used caused a difference in statistics scores.

Monday 5 June 2017

Proposal Defence

The key problem faced by my students in Proposal Defence sessions was the lack of defence.

When they were questioned or challenged, they nodded their heads, accepting the comments and doubts of the panel examiners, rather than arguing their case and clarifying those doubts.

Another problem they commonly had was a lack of coherence, especially between the problem statement, research objectives and research questions.

For those without the learning experience of a foundation research course, i.e. those who obtained a Master's degree by completing coursework rather than a research project, this can cause serious problems in research.

Sunday 4 June 2017

Reading notes: Statistics 101 (Part 1)

Learning from Andy Field, on my daily travels between Jurong East MRT Station and Expo MRT Station

Overview
From initial observations, explanations or theories are generated, from which predictions (hypotheses) can be made. This is where data come into the process, because testing those predictions requires data.
- First, collect some relevant data (i.e. identify things that can be measured) and then analyse those data.
- The analysis of the data may support the theory or give cause to modify it.
- As such, the processes of data collection and analysis and generating theories are intrinsically linked: theories lead to data collection/analysis and data collection/analysis informs theories.

In the process of generating theories and hypotheses, data are important for testing hypotheses or deciding between competing theories. In essence, two things need to be decided: (1) what to measure, and (2) how to measure it.

To test hypotheses we need to measure variables.
Variables are just things that can change (or vary); they might vary between people (e.g., IQ, behaviour) or locations (e.g., unemployment) or even time (e.g., mood, profit, number of cancerous cells).

The key to testing scientific statements is to measure a proposed cause (the independent variable) and a proposed outcome (the dependent variable).

Independent variable: A variable thought to be the cause of some effect. This term is usually used in experimental research to denote a variable that the experimenter has manipulated.
== Predictor variable: A variable thought to predict an outcome variable. This is basically another term for independent variable.

Dependent variable: A variable thought to be affected by changes in an independent variable. You can think of this variable as an outcome.
== Outcome variable: A variable thought to change as a function of changes in a predictor variable; aka dependent variable.

Levels of measurement
Variables can be split into categorical and continuous, and within these types there are different levels of measurement:
1. Categorical (entities are divided into distinct categories):
1.1 Binary variable: There are only two categories (e.g., dead or alive).
1.2 Nominal variable: There are more than two categories (e.g., whether someone is an omnivore, vegetarian, vegan, or fruitarian).
1.3 Ordinal variable: The same as a nominal variable but the categories have a logical order (e.g., whether people got a fail, a pass, a merit or a distinction in their exam).

2. Continuous (entities get a distinct score):
2.1 Interval variable: Equal intervals on the variable represent equal differences in the property being measured (e.g., the difference between 6 and 8 is equivalent to the difference between 13 and 15).
2.2 Ratio variable: The same as an interval variable, but the ratios of scores on the scale must also make sense (e.g., a score of 16 on an anxiety scale means that the person is, in reality, twice as anxious as someone scoring 8).

Two measurement-related issues
1. Standard units of measurement
2. Difference in results between studies.

One way to try to ensure that measurement error is kept to a minimum is to determine properties of the measure (validity and reliability) that give us confidence that it is doing its job properly.
Validity: whether an instrument actually measures what it sets out to measure.
Reliability: whether an instrument can be interpreted consistently across different situations.

Criterion validity: whether an instrument measures what it claims to measure through comparison to objective criteria.
- In an ideal world, you assess this by relating scores on your measure to real-world observations.

== Concurrent validity: a form of criterion validity where there is evidence that scores from an instrument correspond to concurrently recorded external measures conceptually related to the measured construct.

== Predictive validity: a form of criterion validity where there is evidence that scores from an instrument predict external measures (recorded at a different point in time) conceptually related to the measured construct.

- Assessing criterion validity (whether concurrently or predictively) is often impractical because objective criteria that can be measured easily may not exist.
- With attitudes it might be the person’s perception of reality rather than reality itself that you’re interested in.

Content validity: evidence that the content of a test corresponds to the content of the construct it was designed to cover.

Validity is a necessary but not sufficient condition of a measure.
A second consideration is reliability, which is the ability of the measure to produce the same results under the same conditions.
To be valid the instrument must first be reliable.
The easiest way to assess reliability is to test the same group of people twice: a reliable instrument will produce similar scores at both points in time (test–retest reliability).
Sometimes, however, you will want to measure something that does vary over time.

Experiences of applying for IRB

During my PhD, Warwick University had established a rigorous process and standard for ethical approval in educational studies, so applying for approval was the norm and nobody thought of it as a burden.

However, when I returned to Malaysia, where there was no IRB application process, I tried to retain the same practice of, and attitude towards, ethics in education research.

Now that I am working at SUTD, I find that academics and staff are daunted by the tedious process and the revisions needed when applying for IRB approval. The process in Singapore institutions is indeed more complicated, and people who don't see the benefits of it feel frustrated by the process and the rejections.

This prompted thoughts about the challenges UPSI would face if it were to introduce IRB requirements. A dedicated and strict team would be needed to implement this.