How the Internet is changing science

vacognition
I said "science" in my title, but what I meant was the science of thinking (psychology, neuroscience, linguistics, etc.). Which is what I think many of the readers of this forum are interested in, anyway.

A little while back, I wanted to do an experiment about birth order effects. That is, does birth order affect your personality? It's a popular idea, but evidence has been hard to obtain. For me, evidence was really hard to obtain, because I wasn't affiliated with a university at the time and so I didn't have access to research subjects (Psych 100 students are the psychologist's equivalent of the lab rat). Luckily, one of my parents was teaching Psych 100 that year, so we put in the necessary paperwork at the university, and tested people from the class.

More recently, I wanted to do another birth order study. I just put it on the Internet (http://coglanglab.org/BirthOrder). With any luck, I'll get all the data I need from interested volunteers, rather than captive-audience undergraduates. (I am now affiliated with a university, but I had other reasons to put this study on the Web; see below.)

Web-based experimentation is becoming very common. Recently, many readers of this forum helped out with another experiment I did at the Visual Cognition Lab (see here for more experiments). These experiences illustrate many of the reasons:

1) You broaden your subject pool. Are you interested in studying moral cognition? Do you really think your local undergraduates (Harvard undergrads in my case) are a representative sample? What if you want to compare people from different ethnic backgrounds? All this is possible online, but darn hard in a lab setting.

2) It's cheap. This is probably why it's more common at universities with less money. Normally, you need money to do experiments. This means science may become more democratized and less centralized.

3) The large numbers of participants you can get online allow you to do studies that aren't otherwise possible. I started online research because one study I wanted to do required 2000 subjects. Even if I could get 2000 undergrads to come to the lab, I wouldn't feasibly have the time to test them all. Online, this is no longer a problem. There are different reasons you might need many subjects. In that case, I wanted to see how participants perform the very first time they see a particular thing. You can only see something for the first time once, which greatly increases the number of participants needed. In another experiment, I am interested in the properties of English verbs. There are about 1000 that I want to test, and individual participants get pretty bored after the first couple dozen. If each person only rates 25 verbs and I need 20 ratings per verb...you can do the math (a rough sketch below).
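
To put rough numbers on it, here is a back-of-the-envelope sketch (just the figures from this post, ignoring overlap and dropout):

```python
# Back-of-the-envelope participant count for the verb-rating study.
# Figures from the post above: ~1000 verbs, 20 ratings wanted per verb,
# and each volunteer rates about 25 verbs before getting bored.
verbs = 1000
ratings_per_verb = 20
verbs_per_participant = 25

total_ratings = verbs * ratings_per_verb              # 20,000 individual ratings
participants_needed = total_ratings // verbs_per_participant
print(participants_needed)                            # 800 volunteers
```

That's on the order of 800 volunteers before accounting for dropouts or unusable data - not something you can realistically schedule through a lab.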

Basically, I think that the Internet is shaping research into behavior and thinking in much the same way it has brought us blogs and wikipedia. And I thought that might be an interesting conversation topic for this forum. Thoughts?
 
I was discussing this very topic with a friend this week. He does a lot of online studies and I was saying the objection of bias is becoming less and less relevant as (in the UK at least) the internet nears complete penetration.

Many people criticise online experiments as being self-selecting, but given the overwhelming availability of the internet now, I don't see how that's any different to stopping someone in the street.
 
I wondered about that. I also wonder how you guard against hoaxers, sock puppets, etc.

It is true that online participants are a non-random sample. The question, though, is whether they are a less-random sample than the alternative. Most non-medical human subjects are undergraduate psychology students. That's not a random sample, either.

The other question is whether it matters. If you are studying views about sexuality, your subject pool will probably influence the outcome. If you are studying mid-level visual processes, it probably won't. Whether there is a subject bias is an empirical question and can be tested. Sometimes it is, and sometimes the subject population does make a difference.
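
As a concrete (and entirely hypothetical) illustration of that kind of check: assuming you ran the same short task on a lab sample and a web sample, you could compare the two groups directly - sketched below with a two-sample t-test, though the right comparison depends on the measure.

```python
# Hypothetical check for subject-pool bias: the same task run on a lab
# sample and a web sample, then the two groups compared. Scores are made up.
from scipy import stats

lab_scores = [0.71, 0.68, 0.74, 0.69, 0.72, 0.70]   # e.g. per-subject accuracy in the lab
web_scores = [0.70, 0.73, 0.66, 0.71, 0.69, 0.75]   # e.g. per-subject accuracy on the web

# Two-sample t-test: is there evidence the subject population shifts the measure?
t_stat, p_value = stats.ttest_ind(lab_scores, web_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

If the populations differ on the measure you care about, a check like this will show it; if they don't, the make-up of the pool matters less for that particular study.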

I can't speak for everybody in the field, but that's how I was taught to deal with it.

As for hoaxers online: they are a problem in the lab as well. Our lab-based subjects are either undergraduates fulfilling class requirements or paid subjects. A few in both groups are not motivated and just press buttons randomly. In fact, a few empirical studies have found that Web-generated data is more reliable, which makes sense if you think about it. Most participants for Web-based experiments are truly interested volunteers. Also, the experiments are short (5 min. instead of the typical 30-60 min.), and if you get bored, you can quit.
 
It is true that online participants are a non-random sample. The question, though, is whether they are a less-random sample than the alternative. Most non-medical human subjects are undergraduate psychology students. That's not a random sample, either.

The other question is whether it matters. If you are studying views about sexuality, your subject pool will probably influence the outcome. If you are studying mid-level visual processes, it probably won't. Whether there is a subject bias is an empirical question and can be tested. Sometimes it is, and sometimes the subject population does make a difference.

Exactly! Thanks for saying this, it's something that drives me crazy. What is a 'random' sample anyway? Anyone who agrees to be part of a study is automatically self-selecting. The only way to get a true random sample would be to collect data by stealth.
 
Exactly! Thanks for saying this, it's something that drives me crazy. What is a 'random' sample anyway? Anyone who agrees to be part of a study is automatically self-selecting. The only way to get a true random sample would be to collect data by stealth.

there is a difference between targeted asking and generally asking for volunteers - unless your general request reaches all relevant demographics with equal weight, and all relevant demographics are equally inclined to volunteer [distinct from agreeing to participate when asked], your sample will be skewed.
that said, as long as people are aware of its limitations [and thus control for them], the internet is an excellent source for data collection...
 
there is a difference between targeted asking and generally asking for volunteers - unless your general request reaches all relevant demographics with equal weight, and all relevant demographics are equally inclined to volunteer [distinct from agreeing to participate when asked], your sample will be skewed.
that said, as long as people are aware of its limitations [and thus control for them], the internet is an excellent source for data collection...

But as long as your total sample size is sufficient and you collect demographic data then it's no issue. What is important is that your respondents' demographics match either your target or the general population. And the population is not equally weighted.

I do this all the time for market research, and unlike scientific research, the results of market research are tested in the market and proven to be robust.
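
To make that concrete, here is a minimal post-stratification sketch (entirely hypothetical age bands, population shares and responses; Python only for illustration) of how collected demographic data can be used to reweight a skewed online sample toward population proportions:

```python
# Minimal post-stratification sketch: reweight respondents so that each
# demographic group counts in proportion to its (assumed) population share.
# All numbers below are hypothetical.
from collections import Counter

population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}  # assumed census-style shares

# (age band, response on some 1-7 scale) for each online respondent
respondents = [("18-29", 6), ("18-29", 5), ("18-29", 7), ("30-49", 4), ("50+", 3)]

counts = Counter(band for band, _ in respondents)
n = len(respondents)

# weight = population share / sample share, per band
weights = {band: population_share[band] / (counts[band] / n) for band in counts}

weighted_mean = (sum(weights[band] * score for band, score in respondents)
                 / sum(weights[band] for band, _ in respondents))
print(round(weighted_mean, 2))  # ~3.95, versus an unweighted mean of 5.0
```

The over-represented group (the younger respondents here) gets down-weighted, so the estimate moves toward what a population-proportionate sample would have given - which is exactly the point about matching your target or the general population.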
 
But as long as your total sample size is sufficient and you collect demographic data then it's no issue.

I do this all the time for market research, and unlike scientific research, the results of market research are tested in the market and proven to be robust.

sure - if you're controlling for any potential skew through collecting the data that might be responsible for that skew [such as age, socio-economic status, etc., depending on the research] then that's fine. The only problem is that this requires knowing what's going to skew the data, and effectively this requires knowing what you're looking to find out :)

Normally this isn't a problem - but it does require some knowledge of who's going to be taking the test, and is contingent on estimations of what you're expecting to find. Say, for a reaction time test, one might decide that controlling for age would be sufficient, but suppose your web test got picked up by a sportsmen's forum - this could fly under the radar of your demographic data collection and skew the data towards the faster end of the reaction-time range... of course, if you added a question asking for a rating of one's sporting ability, this could prevent that...

the challenge with web-based research is how to deal with over-represented niches, whereas with non-web-based research it's often a challenge not to over-represent the majority...
 
sure - if you're controlling for any potential skew through collecting the data that might be responsible for that skew [such as age, socio-economic status, etc., depending on the research] then that's fine.

I was assuming that anyone conducting research at professional or academic level would be doing so, but you are right to flag it up. There are always people who don't know what they are doing :D
 
I was assuming that anyone conducting research at professional or academic level would be doing so, but you are right to flag it up. There are always people who don't know what they are doing :D

true - i'd expect professionals to have a pretty good idea of all this - and if they didn't, well they should probably think about a new career :D
 
Basically, I think that the Internet is shaping research into behavior and thinking in much the same way it has brought us blogs and wikipedia. And I thought that might be an interesting conversation topic for this forum. Thoughts?

I think it would be nice if people shared their data so we didn't keep getting people trying to do near-identical surveys of Wikipedians.

Impact on research is only one issue. Papers that are easy to access get cited more, and some people are starting to think about this.
 
Tkingdoll said:
Many people criticise online experiments as being self-selecting, but given the overwhelming availability of the internet now, I don't see how that's any different to stopping someone in the street.
Stopping people on the street can be done arbitrarily, whereas participating in online studies is voluntary, and therefore possibly self-selecting. For example, I'll rarely take any sort of online test that has more than about 20 questions.

So we need to do an online-study impatience study!

~~ Paul
 
Stopping people on the street can be done arbitrarily,

People who stop in the street in response to survey-takers are volunteering in exactly the same way someone who agrees to take part in an online study is.

The only difference is, in street studies you can target people who 'look' like the type you want, whereas in online studies you have to filter them out, but the principle is exactly the same. If you want to take part in a study you will stop for the person in the street, or you will click the link.

There are 'clipboard avoidance' psychologies in street studies that are not yet an issue online, mainly because online you know what the study is about before you make your decision. You don't get that with a street study - people will just avoid you or not.
 
People who stop in the street in response to survey-takers are volunteering in exactly the same way someone who agrees to take part in an online study is.

The only difference is, in street studies you can target people who 'look' like the type you want, whereas in online studies you have to filter them out, but the principle is exactly the same. If you want to take part in a study you will stop for the person in the street, or you will click the link.

There are 'clipboard avoidance' psychologies in street studies that are not yet an issue online, mainly because online you know what the study is about before you make your decision. You don't get that with a street study - people will just avoid you or not.

but you can appreciate the difference between proactive and receptive? Street surveys simply require people receptive to being stopped in general, online surveys require people to be proactive in choosing to answer a specific survey. They will therefore produce quite different data sets if not controlled.
 
but you can appreciate the difference between proactive and receptive? Street surveys simply require people receptive to being stopped in general, online surveys require people to be proactive in choosing to answer a specific survey. They will therefore produce quite different data sets if not controlled.

Yes, only if we assume that those are two different types of people. I'm not sure that they are, or will be for much longer.

In street research, you need to be receptive to being stopped, then you need to choose to answer the specific survey. You can be receptive to being stopped but decide against continuing once you know what the survey is. Actually most people carry on, but you can eliminate yourself from the process if you choose.

Online, you need to be receptive to being stopped too (for example through a popup on a news website), but the difference is everyone is stopped and only those who want to proceed do. The rest ignore the request and carry on.

I don't see much difference between someone stopping me in the street and asking if I want to complete a survey, and a popup stopping me in a website and asking me if I want to complete a survey.

As Paul said, this does require some further study, but my retail clients already see the internet as the new high street, and their online consumer surveys as little different from clipboard surveys. It's much easier to get a representative sample, it's cheaper to do, and the results are not considered less valid, because (as I mentioned before) they are borne out in the success of the marketing activity based on them.
 
Yes, only if we assume that those are two different types of people. I'm not sure that they are, or will be for much longer.

In street research, you need to be receptive to being stopped, then you need to choose to answer the specific survey. You can be receptive to being stopped but decide against continuing once you know what the survey is. Actually most people carry on, but you can eliminate yourself from the process if you choose.

Online, you need to be receptive to being stopped too (for example through a popup on a news website), but the difference is everyone is stopped and only those who want to proceed do. The rest ignore the request and carry on.

I don't see much difference between someone stopping me in the street and asking if I want to complete a survey, and a popup stopping me in a website and asking me if I want to complete a survey.

As Paul said, this does require some further study, but my retail clients already see the internet as the new high street, and their online consumer surveys as little different from clipboard surveys. It's much easier to get a representative sample, it's cheaper to do, and the results are not considered less valid, because (as I mentioned before) they are borne out in the success of the marketing activity based on them.

well ok, perhaps we're just thinking about different types of online survey - the type i had in mind is one where you choose, of your own volition, to visit a website specifically to take part in a test you've heard about - i.e. a proactive form of self-selection. I can appreciate that online popups asking for 10 minutes to complete a short survey have a good deal of similarity to, say, street surveys - but i've always assumed that no one actually clicked on them - perhaps they do :)
 
well ok, perhaps we're just thinking about different types of online survey - the type i had in mind is one where you choose, of your own volition, to visit a website specifically to take part in a test you've heard about - i.e. a proactive form of self-selection. I can appreciate that online popups asking for 10 minutes to complete a short survey have a good deal of similarity to, say, street surveys - but i've always assumed that no one actually clicked on them - perhaps they do :)

Oh they do!

But what you are identifying is the gap between market research and scientific research. The sort of studies you were thinking of generally fall into the latter category.

However, I do know psychologists who place newspaper ads, so no reason why they can't place internet ads.

This discussion has inspired me to look into a specific aspect of this subject, I will keep you informed of details/progress if and when it turns into a project.
 
Tkingdoll said:
People who stop in the street in response to survey-takers are volunteering in exactly the same way someone who agrees to take part in an online study is.
I don't think it's the same sort of volunteering, as Andyandy says.

Oh they do! [click on popup surveys]
I don't. Again, there is self-selection going on in both situations we're discussing, but I think it is a different set of people in each case. We need to be careful to analyze who is taking these surveys.

~~ Paul
 
I don't think it's the same sort of volunteering, as Andyandy says.

~~ Paul

Why isn't it?

I think you are underestimating the penetration of the internet. You will get differences in demographic weight but you ask for that data in both types of survey so you know exactly what you are comparing.

What exactly do you think is different about someone who sees a link to an online survey and says "yes I will" and someone who gets asked "want to do a survey?" in the street and says "yes I will"? But let's say you're right, let's say it is a different type of person. So what? What is more valid about the latter category? Suddenly, research is only valid if the questions are asked face-to-face? The ONLY benefit of face-to-face over online is controlling the environment, and that doesn't really apply to street research anyway. If you need to watch your subjects' faces, fine. But for that you need to get them into a room anyway.

Besides, you assume that street surveys all attract the same type of person - but they rely entirely on location. You will get a completely different data set on one street than on another.

Just because YOU don't click on pop-up surveys, that doesn't mean no-one does. I am going to assume you weren't suggesting that though ;)
 
Tkingdoll said:
I think you are underestimating the penetration of the internet. You will get differences in demographic weight but you ask for that data in both types of survey so you know exactly what you are comparing.
I'm not saying any skew is due to some folks having Internet access and some not. I'm saying it's due to the sort of people who like to answer Internet questionnaires and those who don't.

What exactly do you think is different about someone who sees a link to an online survey and says "yes I will" and someone who gets asked "want to do a survey?" in the street and says "yes I will"?
It's a completely different sort of social interaction. In the first case, there is none, whereas in the second case a friendly human being is asking the person to cooperate. I'm thinking, for example, that the more elderly among us might like the second situation better.

But let's say you're right, let's say it is a different type of person. So what? What is more valid about the latter category? Suddenly, research is only valid if the questions are asked face-to-face?
Hang on! I never said anything about validity. I simply said you might be getting a different sort of crowd and that might affect the results.

Just because YOU don't click on pop-up surveys, that doesn't mean no-one does. I am going to assume you weren't suggesting that though.
Yes, but is there a relevant skew in the sort of person who likes to click on pop-up surveys?

It's no longer a question of whether a person has a telephone. Now it's a question of whether a person likes to cooperate with pollsters calling on the telephone. Or interrupting them on the street. Or making them click buttons on a web page.

~~ Paul
 
