"The scale of measurement of data is one of the things that determines the statistical test that can be used to analyze that data. In this video, Dr. Terry Shaneyfelt reviews what variables are, and the scales of measurement (nominal, ordinal, interval, and ratio)" you should be familiar with.
"There are two types of data we encounter, categorical and quantitative data, and they likewise require different types of visualizations. Today we'll focus on bar charts, pie charts, pictographs, and histograms and show you what they can and cannot tell us about their underlying data as well as some of the ways they can be misused to misinform."
"Today, we’ll also introduce the normal (or bell) curve and talk about how we can learn some really useful things from a sample's shape - like if an exam was particularly difficult, how often old faithful erupts, or if there are two types of runners that participate in marathons! "
"Today we’re going to talk about measures of central tendency - those are the numbers that tend to hang out in the middle of our data: the mean, the median, and mode. All of these numbers can be called “averages” and they’re the numbers we tend to see most often - whether it’s in politics when talking about polling or income equality to batting averages in baseball (and cricket) and Amazon reviews. Averages are everywhere so today we’re going to discuss how these measures differ, how their relationship with one another can tell us a lot about the underlying data, and how they are sometimes used to mislead."
"Today we’re going to talk about how we compare things that aren’t exactly the same - or aren’t measured in the same way. For example, if you wanted to know if a 1200 on the SAT is better than the 25 on the ACT. For this, we need to standardize our data using z-scores - which allow us to make comparisons between two sets of data as long as they’re normally distributed. We’ll also talk about converting these scores to percentiles and discuss how percentiles, though valuable, don’t actually tell us how “extreme” our data really is."
"Test statistics allow us to quantify how close things are to our expectations or theories. Instead of going on our gut feelings, they allow us to add a little mathematical rigor when asking the question: “Is this random… or real?” Today, we’ll introduce some examples using both t-tests and z-tests and explain how critical values and p-values are different ways of telling us the same information."
Learn how to compare a P-value to a significance level to make a conclusion in a significance test.
Introduction to Type I and Type II errors in significance testing. Significance levels as the probability of making a Type I error.
Dr. Terry Shaneyfelt discusses what a p-value is, how to interpret it, and what a p-value can't tell you about a study.
Dr. Terry Shaneyfelt discusses statistical power, how it is determined, and how it relates to sample size.
"We're going to finish up our discussion of p-values by taking a closer look at how they can get it wrong, and what we can do to minimize those errors. We'll discuss Type 1 (when we think we've detected an effect, but there actually isn't one) and Type 2 (when there was an effect we didn't see) errors and introduce statistical power - which tells us the chance of detecting an effect if there is one. "
"Today we're going to talk about p-hacking (also called data dredging or data fishing). P-hacking is when data is analyzed to find patterns that produce statistically significant results, even if there really isn't an underlying effect, and it has become a huge problem in science since many scientific theories rely on p-values as proof of their existence! Today, we're going to talk about a few ways researchers have "hacked" their data, and give you some tips for identifying and avoiding these types of problems when you encounter stats in your own lives."
Introductory videos taught by faculty at Imperial College London. You may need to create a free Coursera account to access these videos.
"This module starts by introducing the distinction between association and causation, which is critical not only for epidemiology, but for research in general. Subsequently, you will learn all the main measures epidemiologists use to quantify association; mainly risk and rate differences and risk, rate and odds ratios. Over the course of this module, you will develop the skills to calculate and interpret measures of frequency. This is not enough by itself though, so you will also learn to select the most appropriate measure depending on the research question and the availability of data."
Dr. Terry Shaneyfelt "describes how relative risk is calculated from a cohort study.
Dr. Terry Shaneyfelt demonstrates two methods for calculating the Relative Risk Reduction, a measure commonly used to report outcomes in RCTs.
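Both RR and RRR come from the two groups' event rates; a small sketch with made-up cohort/RCT counts:

```python
def risk(events, total):
    """Simple event rate (risk) in a group."""
    return events / total

# Made-up counts: events out of total patients in each group
treated_risk = risk(30, 300)   # 10% event rate with treatment
control_risk = risk(60, 300)   # 20% event rate without

rr = treated_risk / control_risk   # relative risk
rrr = 1 - rr                       # relative risk reduction
print(f"RR = {rr:.2f}, RRR = {rrr:.0%}")   # RR = 0.50, RRR = 50%
```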
Relying on the RRR alone when making clinical decisions can be misleading. Dr. Terry Shaneyfelt outlines why in this video.
"One of the problems with the way we discuss health interventions is that we see them in black and white. Something is either good for you or bad for you. Things are rarely that simple, though. Moreover, there's "good for you" and "GOOD FOR YOU". How do you tell the difference? Watch and learn."
Dr. Terry Shaneyfelt describes two methods to calculate a patient-specific NNT. The patient-specific NNT should be calculated when your patient's risk for an outcome differs (higher or lower) from that of the patients in the study.
"...another problem is that a lot of ... therapies are anything but benign. They come not only with costs, but also with side effects or problems. We can quantify harms, too. Watch and learn about NNH!"
"Knowing how to interpret an odds ratio (OR) allows you to quickly understand whether a public health intervention works and how big an effect it has. For example, how effective is the flu vaccine in preventing people from getting the flu? The video explains how to calculate and interpret an OR, and decide whether it indicates a positive or negative outcome."
"Odds ratios are the measure of association in a case control study. This video demonstrates the calculation of the OR"
"Confidence intervals allow us to quantify our uncertainty, by allowing us to define a range of values for our predictions and assigning a likelihood that something falls within that range. And confidence intervals come up a lot like when you get delivery windows for packages, during elections when pollsters cite margin of errors, and we use them instinctively in everyday decisions. But confidence intervals also demonstrate the tradeoff of accuracy for precision - the greater our confidence, usually the less useful our range."
"Dr. Terry Shaneyfelt describes modes physicians use to make diagnoses, including pre-test and post-test probabilities. Test and treatment thresholds are described and how they are used to decide on testing and treatment."
Dr. Terry Shaneyfelt describes the systematic approach to choosing a diagnostic test, incorporating pretest probability, the role of testing, and test characteristics.
Dr. Terry Shaneyfelt describes how to calculate sensitivity from numbers given in a diagnostic test study.
Dr. Terry Shaneyfelt describes how to calculate the specificity of a test from a diagnostic test study.
Dr. Terry Shaneyfelt describes how to calculate the probability that a patient has disease when they have a positive test.
Dr. Terry Shaneyfelt describes how to calculate the probability a patient doesn't have disease when they have a negative test.
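The four calculations in the videos above all come from the same diagnostic-test 2x2 table; a compact sketch with made-up counts:

```python
# Made-up diagnostic-test 2x2 counts
tp, fn = 90, 10     # diseased patients: positive test / negative test
fp, tn = 45, 855    # healthy patients:  positive test / negative test

sensitivity = tp / (tp + fn)   # P(test positive | disease)
specificity = tn / (tn + fp)   # P(test negative | no disease)
ppv = tp / (tp + fp)           # P(disease | positive test)
npv = tn / (tn + fn)           # P(no disease | negative test)
```

Note that sensitivity and specificity are properties of the test, while PPV and NPV also depend on how common the disease is in the tested population.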
Dr. Terry Shaneyfelt describes the role of likelihood ratios in diagnostic testing. After this overview of the concepts, watch the next two videos to review the calculations.
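Likelihood ratios tie test characteristics to post-test probability; a sketch with hypothetical values:

```python
# Hypothetical test characteristics
sensitivity, specificity = 0.90, 0.95

# LR+ : how much a positive test raises the odds of disease
lr_pos = sensitivity / (1 - specificity)
# LR- : how much a negative test lowers the odds of disease
lr_neg = (1 - sensitivity) / specificity

# Likelihood ratios convert pre-test odds into post-test odds:
pretest_prob = 0.10
pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * lr_pos
posttest_prob = posttest_odds / (1 + posttest_odds)
```

Here a positive result moves a 10% pre-test probability up to about two-thirds, illustrating why the LR, combined with pre-test probability, is what actually changes clinical decisions.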
Studies are combined in a meta-analysis using a common summary outcome measure. In this video Dr. Terry Shaneyfelt reviews commonly used summary outcome measures.
Dr. Terry Shaneyfelt discusses how to interpret the information contained in a typical forest plot.
What are meta-analyses, and how do you interpret their results?