Creating Your Own Measure

Tips for Writing Good Questions

1. Avoid Leading Words / Questions
Subtle wording differences can produce substantial differences in results. “Could,” “should,” and “might” all sound about the same, but they can produce a 20% difference in agreement with a question. Strong words such as “force” and “prohibit” imply control or action and can bias your results.

Example #1 

The government should force you to pay higher taxes.

No one likes to be forced, and no one likes higher taxes. This agreement-scale question makes raising taxes sound doubly bad. Consider neutral alternatives, such as:

“The government should increase taxes” or “The government needs to increase taxes.”

Example #2 

How would you rate the career of legendary outfielder Joe DiMaggio?

This question tells respondents that Joe DiMaggio is a legendary outfielder, and this type of wording can bias their answers. How about replacing the word “legendary” with “baseball,” as in:

How would you rate the career of baseball outfielder Joe DiMaggio?

2. Give Mutually Exclusive Choices
Multiple-choice response options should be mutually exclusive so that respondents can make clear choices. Don’t create ambiguity for respondents. Review your survey, identify places where respondents could get stuck with either too many or no correct answers, and revise accordingly.

Example #1

What is your age? ___12-15 ___15-18 ___18-21 ___ 21-25

Which answer would you select if you were 15, 18, or 21? Questions like this will frustrate participants. Revise the ranges so they don’t overlap (e.g. 12-14, 15-17, 18-20, 21-25).

Example #2

What type of vehicle do you own?  ___Van   ___SUV  ___Sedan

This question has the same problem. What if the respondent owns a truck, hybrid, convertible, crossover, motorcycle, or no vehicle at all? Add the missing categories or an “Other” option.

3. Ask Direct Questions

Questions that are vague or do not communicate your intent limit the usefulness of your results. Make sure respondents know what you’re asking. Test your survey with five friends and check whether their responses are on topic.

Example #1

What suggestions do you have for improving Tom’s Tomato Juice?

This question may be intended to obtain suggestions about improving taste, but respondents will also offer suggestions about texture, the can or bottle, mixing juices, or even using tomato juice as a mixer or in recipes.

Example #2

What do you like to do for fun?

Finding out that respondents like to play Scrabble may not be what the researcher is looking for, but it may be the response received. If the researcher actually wants to compare, say, moviegoing with other forms of paid entertainment, the question needs to say so; as written, a respondent could take it in many directions.

4. Add a “Prefer Not to Answer” Option 

Sometimes respondents may not want or be able to provide the information requested.

  • Questions about income, occupation, finances, family life, personal hygiene, and personal, political, or religious beliefs can be too intrusive and be rejected by the respondent. Privacy is an important issue to most people. Incentives and assurances of confidentiality can make it easier to obtain private information.
  • While current research does not show that PNA (Prefer Not to Answer) options increase data quality or response rates, many respondents appreciate this non-disclosure option.
  • Furthermore, different cultural groups may respond differently. One recent study found that while U.S. respondents skip sensitive questions, Asian respondents often discontinue the survey entirely.
  • Some types of demographic questions are very sensitive for some categories of respondents. So when in doubt, give respondents a PNA for the question.

  • What is your race?
  • What is your age?
  • Did you vote in the last election?
  • What are your religious beliefs?
  • What are your political beliefs?
  • What is your annual household income?

The above questions should always include an option to not answer (e.g. “Prefer Not to Answer”).

Pilot Testing

Whenever you create a new measure or alter someone's previously validated measure, you need to pilot test it before collecting your "real" data. Piloting ensures that your measure has acceptable internal reliability and lets you iron out any problematic questions.

Here are some general guidelines to follow when piloting a new measure: 

  1. You should have a good reason for creating a new measure (e.g. a measure of the construct does not currently exist or is inaccessible). Generally speaking, it is recommended to use a previously validated measure (if one exists) rather than create a new one.
  2. Make sure that your items are derived from sound theory/rationale.
  3. Look to similar measures in the field for ideas on items or, if none exist, to measures of similar constructs.
  4. Ask experts in the field to look over your new items to provide you with feedback.
  5. Generally speaking, a measure of a single construct or a single sub-scale should have a minimum of 10 items in order to obtain adequate internal consistency reliability.
  6. When creating a new scale, it is highly recommended to begin with about three times the minimum number of items (e.g. ~30 items if you need 10), because low-reliability items will be deleted during the piloting process.
  7. Once you have a solid pool of questions for your survey, have several different types of people take it.
  8. In the initial round, have friends/classmates (the more the better, but 5-10 would be sufficient) provide input. It is a good idea to have a mix of individuals take the questionnaire who understand issues surrounding self-reports and the nature of psychological measurement as well as those who do not. Some questions to ask this group include:
    1. How easy/difficult was it to take this survey? How long did it take to complete? 
    2. Were the instructions helpful and clear, or were they lacking and/or vague? What did they think the survey was about? ***Never give away the hypothesis or construct under investigation; this prevents biased answering! Just tell participants that you are interested in their opinions or attitudes on ____ (some broad topic).
    3. Were any questions confusing, such that the participant did not really understand what it was asking and therefore did not really know how to respond? Did any of the questions provoke a negative response, such as frustration or anger? Did participants feel that any question made them feel slightly awkward, uncomfortable, or offended in any way? If so, ask them to explain why/how in as much detail as possible. 
    4. Did participants feel any of the questions were sensitive in nature, private or intrusive? Did they personally feel that they answered dishonestly or in a socially desirable manner? If not, could they imagine that people with different backgrounds/experiences might potentially answer dishonestly or in a socially desirable manner? 
    5. What about the answer choices? Did participants find themselves wanting to select more than one option in a forced-response question, or wanting to give an answer that didn't exist? (For this reason, it is always a good idea to provide an "Other" option with a blank for a free-form response on every question when piloting; if multiple people give the same free-form response, make it an answer choice in the edited version.)
    6. Finally, did participants feel that the questions were accurate in terms of what you hoped to measure? (It is okay to reveal your hypothesis or construct after they complete the survey, as long as they don't discuss it with future participants.) Do they have suggestions for alternative ways of asking a question, or additional questions that would better inform your construct?
  9. After making changes from the initial round, pilot the questionnaire with individuals who could be potential participants, i.e. people similar in age and other key demographics to your population of interest. Here you might aim for 15-30 participants and analyze the internal reliability of the measure (see the sketch after this list). Again, do not bias your participants by indicating what your survey is intended to measure before they complete the questionnaire!
  10. Finally, after removing items with poor reliability and any other items you discover do not properly or accurately measure your construct, you should have a reasonable scale ready for your main study! Internal consistency reliability above α = .85 is adequate; above α = .90 is ideal.
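If you want to check internal consistency yourself rather than rely on a stats package, here is a minimal sketch of Cronbach's alpha in Python (numpy is assumed; the function name and example data are illustrative, not from the original):

    import numpy as np

    def cronbach_alpha(items):
        # Cronbach's alpha for a (respondents x items) score matrix
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Example: 5 respondents x 4 items on a 1-5 Likert scale
    scores = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4], [3, 3, 2, 3], [4, 4, 4, 5]]
    print(round(cronbach_alpha(scores), 3))  # -> 0.936; aim for > .85, ideally > .90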

Power Analysis for Sample Size

How many responses do you really need? This simple question is a perennial quandary for researchers. A larger sample can yield more accurate results, but collecting more responses than you need can be pricey.

Consequential research requires an understanding of the statistics that drive sample size decisions. A simple equation will help you put the migraine pills away and sample confidently.

Before you can calculate a sample size, you need to determine a few things about the target population and the sample you need:

  1. Population Size — How many total people fit your demographic? For instance, if you want to know about mothers living in the US, your population size would be the total number of mothers living in the US. Don’t worry if you are unsure about this number. It is common for the population to be unknown or approximated.
  2. Margin of Error (Confidence Interval) — No sample will be perfect, so you need to decide how much error to allow. The confidence interval determines how much higher or lower than the population mean you are willing to let your sample mean fall. If you’ve ever seen a political poll on the news, you’ve seen a confidence interval. It will look something like this: “68% of voters said yes to Proposition Z, with a margin of error of +/- 5%.”
  3. Confidence Level — How confident do you want to be that the actual mean falls within your confidence interval? The most common confidence levels are 90%, 95%, and 99%.
  4. Standard Deviation — How much variance do you expect in your responses? Since we haven’t actually administered the survey yet, the safe decision is to use .5. This is the most forgiving value (it maximizes the product StdDev × (1 − StdDev) in the formula below) and ensures that your sample will be large enough.

Okay, now that we have these values defined, we can calculate our needed sample size.

Your confidence level corresponds to a Z-score. This is a constant value needed for this equation. Here are the z-scores for the most common confidence levels:

90% – Z Score = 1.645
95% – Z Score = 1.96
99% – Z Score = 2.576

If you choose a different confidence level, consult a Z-score table to find your score.
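If you would rather compute the Z-score than look it up, a small sketch using scipy's inverse normal reproduces the values above (scipy is an assumption here; any stats package works):

    from scipy.stats import norm

    def z_for_confidence(level):
        # Two-sided z-score for a confidence level, e.g. 0.95 -> 1.96
        return norm.ppf(1 - (1 - level) / 2)

    for level in (0.90, 0.95, 0.99):
        print(f"{level:.0%} confidence: z = {z_for_confidence(level):.3f}")
    # 90% -> 1.645, 95% -> 1.960, 99% -> 2.576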

Next, plug your Z-score, Standard Deviation, and margin of error into this equation:*

Necessary Sample Size = (Z-score)² × StdDev × (1 − StdDev) / (margin of error)²

Here is how the math works assuming you chose a 95% confidence level, .5 standard deviation, and a margin of error (confidence interval) of +/- 5%.

((1.96)² × .5 × (1 − .5)) / (.05)²

= (3.8416 × .25) / .0025

= .9604 / .0025

= 384.16

--> 385 respondents are needed (always round up)

Voila! You’ve just determined your sample size.
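The same arithmetic is easy to script as a sanity check. A minimal sketch in Python (the function name is illustrative):

    import math

    def sample_size(z, std_dev, margin_of_error):
        # Infinite-population formula; always round up to a whole respondent
        return math.ceil((z ** 2) * std_dev * (1 - std_dev) / margin_of_error ** 2)

    print(sample_size(1.96, 0.5, 0.05))  # -> 385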

If you find your sample size is too large to handle, try slightly decreasing your confidence level or increasing your margin of error – this will increase the chance for error in your sampling, but it can greatly decrease the number of responses you need.

*This equation is for an unknown or very large population. If your population is smaller and known, apply a finite population correction to the result (sketched below).
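The usual adjustment is the standard finite population correction, which the original text does not spell out; under that assumption, a sketch:

    import math

    def corrected_sample_size(n0, population):
        # Shrink an infinite-population sample size n0 for a known population
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    print(corrected_sample_size(384.16, 2000))  # -> 323 for a population of 2,000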

Tips for Online Surveys

Read the blog post “Five Things you should NOT be Doing in Online Data Collection” before beginning your project!

Creating a Survey in Qualtrics 

Pomona students can view the Qualtrics Tools page for tips and can create an account or log in on the Pomona Qualtrics page.*

*Please note that Qualtrics is a subscription-based service paid for at the institutional level by Pomona College. It is therefore accessible only to students, faculty, and staff with an active Pomona College user account.
 

Everything you ever wanted to know about creating a Qualtrics Survey!

Creating a Project in MTurk 

Creating an Amazon MTurk Account:

  1. Visit the MTurk Requester Page (see Figure 1)
  2. Click the orange tab on the bottom right “Sign in to Create Project”
  3. If you already have an Amazon account, enter your username and password; if not, create a free account.
  4. Upon initial registration, Amazon will ask how you intend to use the program; answer the questions.
  5. Upon initial sign-in, Amazon will show you the resources page (see Figure 2), which should answer most of your questions.

Creating an MTurk Project:

  1. You will then be taken to the new project page (see Figure 3)
  2. In the tab “Enter Properties” you must:
    1. Enter a project name and paste your survey link (e.g. your Qualtrics survey URL)
    2. Describe survey (title, description, keywords)
    3. Set up your survey parameters (payment amount, sample size, time allotted for taking the survey, when you want data collection completed, and time until auto-approval of payment)
    4. Determine whether you desire/need “Master Workers” (see Figure 4)
    5. Select inclusion criteria for your sample (two criteria are included in the base price; additional criteria cost more, see Figures 5 & 6), and indicate whether your survey contains adult content


Creating a Project in TurkPrime 

What is the difference between MTurk and TurkPrime?

TurkPrime is a technology company that optimizes participant-recruitment platforms for scientific research. Mechanical Turk (MTurk) is one platform they work with; they also work with dozens of market-research platforms (Prime Panels). Each platform has its own participant pool, referred to as an opt-in panel, and participants on these panels are profiled on hundreds of variables. Invitations are sent via email and dashboards to specific participants based on their demographic profiles. Depending on a study's needs, one or more platforms may be used; when the target group is very hard to reach, a study may be feasible only by combining the resources of multiple platforms. This allows for more specific and representative samples, as well as samples that could not otherwise be attained using MTurk alone.

  1. Create an account at TurkPrime.com (information about account creation is available there).
  2. Create your study using one of three options:
    1. MTurk Toolkit Study
    2. Prime Panel Study
    3. Fully-Managed MTurk Study
[Image: TurkPrime Options, a comparison of the three MTurk/TurkPrime study types]
