I won't be filling out your survey and here is why:
Do you believe Men and Women have equal rights?
This question can be interpreted in two ways: 1. In practice, men and women have the same rights. 2. Men and women intrinsically have the same rights, but perhaps not in practice.
Both readings are common interpretations of the statement - equally common, in fact.
If someone answers yes, you won't be able to tell which of the two they mean, which will lead to faulty, unreliable data.
Do you believe one gender has more rights than the other?
Same thing as above. "Having rights" could either mean they already legally have those rights, or that they are intrinsically entitled to those rights even though things may be unequal in practice.
You need to be very specific when asking survey questions or else your data will be skewed.
Do you believe that women/men are objectified?
These two questions are vague and will not yield useful data. First of all, you need to understand the fundamental difference between objectification in general and sexual objectification. I think you intend to ask about the latter, because the former is more of a workers' rights issue than a gender equality issue.
Please redo this survey and rephrase these questions, otherwise you are not getting accurate data because these questions can be too broadly interpreted. Until you put some more serious thought into your survey, I won't be participating and I recommend that no one else participate either.
I would say your knowledge of statistics is non-existent. 100 can be a lot or it can be too little - it depends on other factors, like how variable the thing you're measuring is and how confident you want to be in your finding.
(Links are only examples)
FYI, standard research ethics say that you should post about the ethical impact of this survey, confidentiality concerns, what happens to the info, how it is used, etc.
http://www.qualtrics.com/blog/ethical-issues-for-online-surveys/
I'm pretty sure the university has a license which entitles you to have a free qualtrics account:
http://www.qualtrics.com/academic-solutions/university-of-california-berkeley-psychology-department/
It won't help get the data for this survey, but it will make the next one easier....
I don't know how advanced your statistics class is, but you could have devised some Likert scales to add some flavour. Be sure to check the scale with a Cronbach's alpha analysis; anything under .8 is rubbish. Also, for future surveys, I feel that Qualtrics is a more powerful free alternative to Google surveys. http://www.qualtrics.com
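If you end up exporting the responses, alpha is simple enough to compute yourself. Here's a rough sketch in Python - the response matrix is made up, and it assumes every column is one Likert item on the same scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 respondents x 4 Likert items on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))  # keep the scale only if this is >= .8
```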
>the observers said Mr. Putin had faced no real competition and unfairly benefited from lavish government spending on his behalf.
That's true, but it only means the campaign wasn't totally fair, since Putin already has something of a cult of personality. It doesn't mean fraud.
>And yet the statistical examination of recent elections shows extreme voting abnormalities in certain districts.
Well, abnormalities don't mean fraud. For example, in places where KOIBs (electronic ballot scanners) were installed, Putin got fewer votes, and this is usually presented as an abnormality (possible fraud). But studied closely, it's a fine example of the 1936 telephone-poll effect: KOIBs were installed mostly in big cities like Moscow, while in villages the ballots were counted by hand as always - and every social survey shows the rural population are Putin's most ardent supporters, while most of the opposition lives in the big cities.
And there are many such examples. Your article doesn't explain their methods, so I can't examine them.
But I'll note that Putin got 60% in the exit polls and 63% in the final results. So the campaigning may have been as unfair as you like - there was no election fraud in the 2012 presidential election.
But I'm not some bot, and I won't deny there was fraud in the 2011 Duma elections, followed by months of public protests with hundreds of thousands of people in the streets - and because of that the system was improved.
And the next elections will be even cleaner. And the Western media will still paint them as fraudulent and unfair (because the candidates they favour will lose), while readily recognizing elections in some place much more sinister, like Ukraine.
Verifying a question with data doesn't make that question more valuable. I don't understand the motivation behind taking standard marketing questions and using them in an MBTA questionnaire. It sounds like they outsourced the survey to a cheap marketing company (http://www.qualtrics.com/) that did nothing to refine their questions to the ~500 person "panel." I would expect to see this kind of thing for a bottle of shampoo, or a bag of chips, not an essential city service.
Furthermore, details on that panel, like where they live, their occupation, and the usual time of day in which they use the service are missing.
Interesting point. Similar to 1936?
http://www.qualtrics.com/blog/the-1936-election-a-polling-catastrophe/
> For this cycle, they had polled a sample of over 2 million people based upon telephone and car registrations. The results they obtained predicted Landon would win in a landslide with over 57% of the popular vote. ... During the Depression, not everyone could afford a car or a telephone. Those who did were usually wealthier, and therefore less likely to be directly helped by “New Deal” programs. As a result, this group was more likely to disapprove of Roosevelt than the general population. ... This projected Landon would win 370 out of 539 possible electoral votes. Instead, the actual results gave a very different picture.
So we're not polling cell phones as much, and not getting good poll results?
Both of your questions have to do with the limits of Google Docs (I'm guessing you're using Google Forms to design a survey). Can a Form you create give unique codes to people at the end? Can Forms direct workers to different videos (branch logic in Qualtrics)? I don't know the answers to these, but they're both questions about using Google Forms, not MTurk.
The Excel/Spreadsheet option looks great for flagging high and low results. What if I wanted to have a "slider" from left to right next to each result that showed where the marker was compared to the range such as what is depicted here: http://www.qualtrics.com/university/researchsuite/basic-building/editing-questions/question-types-guide/graphic-slider/
It's not the law that's the problem. It's the implementation. It's always the implementation that's the hard part. Don't let perfect be the enemy of the good. With voting tests, the risks are just too damn high for manipulation. Even very minor decisions on wording can have huge effects on results, intentional or not. That's why it's so dangerous. It's too much influence in too small a group. It's safer -- and I'll say more effective -- to let "uneducated" voters cast ballots because they'll naturally cancel each other out.
How do you think 538 so accurately predicts elections? How do you think probability and statistics work?
Here's an article to help you understand how it works: http://www.qualtrics.com/blog/determining-sample-size/
So there's a quantifiable degree of confidence in how accurate it is. Say your estimate is 55% and its standard error is 2 percentage points (it's the standard error, not the variance, that matters here). Then there's roughly a 95% probability that the true number lies between 51% and 59% - about two standard errors either side. People risk their lives on worse odds than this.
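To put numbers on that, here's a quick sketch (it assumes the estimate is roughly normal and treats the 2% as a standard error):

```python
from scipy.stats import norm

estimate = 0.55       # poll says 55%
std_error = 0.02      # standard error of 2 percentage points
z = norm.ppf(0.975)   # ~1.96 for a 95% interval

low, high = estimate - z * std_error, estimate + z * std_error
print(f"95% CI: {low:.0%} to {high:.0%}")  # roughly 51% to 59%
```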
Try qualtrics next time
You can customize the survey way more, and it gives you the data in more ways than Survey Monkey does. Especially if you're a data nerd, you'll love this survey site way more!
Plus, I am pretty sure there is no charge for up to 100 responses.
Take some time to mess around and see all the different ways you can customize the questions and how they're asked/recorded; there's a lot on there, and it's a very handy survey tool for school as well.
Here is an outline of the sample size calculation.
http://www.qualtrics.com/blog/determining-sample-size/
Here is another.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2876926/
I rather like stuff from the NIH.
You might want to create this via SurveyMonkey or Qualtrics, it would probably be easier than sorting through comments for data collection. Plus, people can submit their answers anonymously instead of having it tied to their reddit handle.
On that note, I doubt mine would be too helpful as I took the SAT before the scoring guidelines changed, so my score would look low.
http://www.qualtrics.com/university/researchsuite/basic-building/look-and-feel/mobile-device-look-and-feel/ According to their website it looks like they already have more mobile friendly options, and are working on others, but the options don't exist for all the question types (including the question type/format I had to use for this survey).
Hopefully for later in the semester I'll be able to use more mobile friendly questions, but for now this is my first survey ever and all about whether or not I can appropriately show/order raw data (statistics class).
Thank you so much for your feedback though! I don't currently own a mobile device and so I wouldn't have any idea what it looked like or how it responded if someone else didn't tell me. :)
Here is a sample size article. Something as big as Dota, with millions of games, will require a pretty big sample size for even 90% certainty.
Well, polling every single person in the country is not only going to make your survey nearly impossible to actually finish, it's collecting far more data than you need to get basically the same results. For a nationwide survey, a sample of just under 400 people is enough to get the same results with a margin of error of under 5% as long as that sample is proportionally representative of the population.
There's actually some vaguely complicated math that goes into determining how big of a sample is needed to represent a population, but essentially as your sample size increases, the likelihood that your sample accurately reflects the whole population increases. It's impossible to get a 100% accurate sample of a population unless you actually survey the entire population, but in most cases, a 2% or 5% margin of error is perfectly acceptable.
This article talks about how people go about choosing a sample size for their surveys if you want more info.
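If you're curious about the actual math behind that just-under-400 figure, here's a rough sketch of the standard calculation (assuming the usual worst-case proportion of 0.5 and ignoring any finite population correction):

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Sample size needed to estimate a proportion within +/- margin_of_error at 95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # ~385 respondents for a 5% margin of error
print(sample_size(0.02))  # ~2401 for a 2% margin - precision gets expensive fast
```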
That's a "Q" with a stroke through it.
https://ulfp.qualtrics.com/WRQualtricsShared/Brands/ulfp/favicon.ico
It's the Qualtrics logo - the guys who sell the survey solution.
The "Q" looks a lot like Quora's.
And you see "everyone equal" in there where?
But I suppose I'd agree with the people in this poll in "what they would like it to be" http://www.qualtrics.com/innovation-exchange/wp-content/uploads/2014/10/article-dan-ariely-wealth-scale.jpg
So sorry if this post is bothering you. I really do need respondents to complete the survey in order to finish my master's degree. This post is not spam. The website is an online survey tool called Qualtrics: http://www.qualtrics.com/ The link I posted starts with "strathbusiness" because it's an account under the University of Strathclyde. Here is the link for reference: http://www.strath.ac.uk/business/
I apologize again, everyone. I didn't mean to disturb anybody; I'm just asking for help. Sorry for any inconvenience.
Take a look at Qualtrics. Founded around 2002, raised a Series A of $70m in 2012, and another $150m in 2014. Valued at over $1b. Revenue of somewhere between $50-$100m I would guess.
I actually looked this up, because I would tend to agree with you. Apparently for an unknown, or very large population, a decent sample size is 385. So it's not terrible.
quick Google search shows: http://www.qualtrics.com/blog/determining-sample-size/
Not really. If you understand statistics, you know that a sample of 385 is enough for any population that's very large or of unknown size, assuming a 95% confidence level, a 0.5 standard deviation (maximum variability), and a ±5% margin of error.
So the real problem is making sure the sample actually represents the population, not the total number itself.
learn your statistics brah.
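For what it's worth, the reason population size stops mattering once it gets big is the finite population correction. A quick sketch using the same 95% / 0.5 / ±5% assumptions:

```python
import math

def sample_size(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Required sample size with the finite population correction applied."""
    n0 = z**2 * p * (1 - p) / e**2          # infinite-population sample size (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for pop in (1_000, 100_000, 10_000_000):
    print(pop, sample_size(pop))   # 278, 383, 385 - it flattens out fast
```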
Perfect analogy.
But this points to several flaws of surveys too. Some examples: like soup, sometimes the sample doesn't represent the whole pot - sometimes the bottom is the best! So it raises questions about survey bias. When reviewing statistical data, it's always important to ask who was surveyed. Was it a certain region? Age group? Location (was it a church parking lot)? Etc.
This might be a little something that TurkOpticon could do (or a plug-in could be written in JavaScript to do); I'll bet that Qualtrics has some way to import a file into a survey...
"The Import Survey option allows you to upload a .QSF or .TXT file of a survey into Qualtrics."
7 games is nearly half a season's worth of games, but I think we're looking at the wrong sample to measure, so let's think about this through rushing attempts. In 7 games he's had 109 attempts. In that case, I think it's fair to project about 250 rushes on the season, and 109/250 is around 44%. For me, that's a decent sample, but you're probably looking for something bigger, so I looked at using actual statistics to figure out a reliable sample size.
Calculating a statistically reliable sample size would be difficult (and somewhat pointless), since Bernard would probably have to rush close to 300 times to get a reliable sample at a 95% confidence level (which he won't come close to doing in a regular season). See here. You'd have to widen the margin of error or lower the confidence level to get a workable sample size, and even at a 90% confidence level with a 0.5 standard deviation, we'd need about 270 rushes.
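For anyone who wants to check those figures, here's a rough sketch of the same proportion-style calculation at different confidence levels (it assumes p = 0.5 and a ±5% margin, and it treats carries as independent samples, which they really aren't):

```python
import math
from scipy.stats import norm

def required_sample(confidence: float, p: float = 0.5, e: float = 0.05) -> int:
    z = norm.ppf(1 - (1 - confidence) / 2)        # ~1.96 at 95%, ~1.64 at 90%
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(required_sample(0.90))  # ~271 carries - lines up with the ~270 figure at 90% confidence
print(required_sample(0.95))  # ~385 carries under these assumptions
```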
I think that most people now who want more functionality use qualtrics. I've never used it personally, but they'd be your competition if you tried to do something like this.
The main feature I need in my web-based surveys is HIPAA compliance, so I write them myself (python, SQL) and host them under multiple layers of security on our lab's server.
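For anyone curious what "write them myself" can look like at its simplest, here's a minimal sketch assuming Flask and SQLite - the original only mentions Python and SQL, so the framework, schema, and endpoint here are my guesses, and real HIPAA compliance needs far more than this (TLS, access control, audit logs, encryption at rest):

```python
# Minimal sketch of a self-hosted survey endpoint. Security layers not shown.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "responses.db"

def init_db() -> None:
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS responses ("
            "id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "question TEXT NOT NULL, answer TEXT NOT NULL)"
        )

@app.post("/submit")
def submit():
    payload = request.get_json(force=True)
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "INSERT INTO responses (question, answer) VALUES (?, ?)",
            (payload["question"], payload["answer"]),  # parameterized to avoid SQL injection
        )
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    init_db()
    app.run()
```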
Note: This is only applicable to those wantrepreneurs and entrepreneurs who are still in college (or are currently faculty I'd imagine as well).
When you mentioned Survey Monkey it reminded me that you would be very surprised at how many resources you have access to when you're a student.
A couple months ago, I wanted to send out a marketing survey that was more than 10 questions long (Survey Monkey's limit), and after some prodding around on my University's website, I found that I had full access to the premium Qualtrics online surveying tool.
I can't participate in this survey as I'm not in the USA, but I assume that this survey has "Anonymize Responses" turned on?
http://www.qualtrics.com/university/researchsuite/advanced-building/survey-flow/anonymize-responses/
It sounds like you are looking for syndicated research on craft breweries, which is going to cost a bit of money. It also probably will only help you a little bit with investors.
Even more effective (but more expensive) is custom market research that focuses on what your brand of craft beer means to consumers, if it fits their needs, etc.
My low-cost recommendation is Google Consumer Surveys, Survey Monkey, or Qualtrics. They all cost a relatively small amount of money, but they'll give you good information for potential investors if you do your survey right. Comment back if you need help with your survey.