Origin has one, but only enables it during sales. Battle.net doesn't have one. The Nintendo eShop doesn't either (admittedly they aren't the best example of an online store). The Google Play Store and Apple App Store don't have them. The Kindle store doesn't, and Amazon strongly pushes its "buy with one click" for products it sells on its site.
OK, I'm calling it: this is an A/B test. They are basically testing which version of the sales page ("motor" HP, "real" HP, no HP) leads to the most sales. Expect it to be consistent again in 3-4 weeks or so. In the meantime, you will all see different versions of the sales page.
For more on the topic: https://www.optimizely.com/split-testing/
The Golem Alpha IV reveal (yagna v0.6.0, which enables mainnet utilisation) was not the mainnet launch for the New Golem. It was a release candidate to be tested in production (TIP) and as such not widely announced -- just a blog post; no Gitcoin bounties nor a hackathon this time, so that's one reason why there's not much traffic there yet. It was also not fully backward compatible with v0.5.0 and earlier, which is the second reason (we are still in the alpha phase, but the next release -- the mainnet launch -- will be backward compatible with v0.6.0).
We just wanted to share our goodies with you -- our most active and eager community members -- to learn how it feels from your perspective. We know the limitations of v0.6.0, but we love it and are proud of it.
On the other hand, it is a question for you: will you need and want to buy/rent computational resources via the New Golem Network? I believe the answer is simple, and it is positive.
We can't wait to find out what the traffic will be some time after we launch mainnet and developers start deploying apps.
This is the first Google result for "shopping cart abandonment".
>The shopping cart abandonment rate is an important metric for e-commerce sites to keep track of because a high abandonment rate could signal a poor user experience or broken sales funnel. Reducing shopping cart abandonment leads directly to more sales and revenue, so optimizing the checkout flow is a core area of focus for many online retailers.
It sounds like it just messes with the campaign's financial tracking of consumer data trends for merchandise. A public campaign to abandon carts would massively overestimate the number of people who ultimately decide not to purchase an item.
...That's really about it. The campaign can recognize that there's a public campaign going on to screw with them, and just be like "well, that's one metric we can't really track right now".
If anything, it may have the adverse effect of prompting Trump supporters to be like "How dare they! I'm going to actually buy merch to support my president!"
You can either...
A) use Google Analytics to set up an experiment (that's what they call A/B testing). Go to Behaviour -> Experiments.
B) Use Optimizely. I've started using it recently and it makes creating the alternative pages a load easier.
There's your choice.... A... or B... heh -_-
My honest opinion is that you should probably start over with a blank slate -- it just doesn't look very professional, isn't optimized for a quick impression, and will be tough to optimize. Because this is an image, I can't tell what this site is running on, but from the looks of it you could easily run a WordPress site. Check out ThemeForest and purchase a nice theme for $40-80 and you'll have MUCH, MUCH better results. Once you get a better design, don't forget to test, test, test. Some good tools: Optimizely, SurveyGizmo, and Cooptimizers.
> Did they gain traffic?
A/B tests are not designed to test such a thing, so no, that's not taken into consideration.
However, it should be pointed out that they gain the same amount of traffic regardless of where they put the "pay with bitcoin" button, so if they make more money per visitor with it hidden away, then it's still in their best interest to (1) "accept" bitcoin through some back channel while (2) making the payment landing page as optimized as possible.
> Why isn't it on the payment page, where one might expect it?
Read the study again. It was on the payment page, and when it was they were making much less money than when it wasn't.
When they moved it off the landing page, they (1) got all the benefits of accepting bitcoin while (2) not having the downside of having too many payment options on the landing page.
Since you seem confused about what an A/B Test is and what it can and can't test, I'd recommend reading up about them.
>Split testing (also referred to as A/B testing or multivariate testing) is a method of conducting controlled, randomized experiments with the goal of improving a website metric, such as clicks, form completions, or purchases. Incoming traffic to the website is distributed between the original (control) and the different variations without any of the visitors knowing that they are part of an experiment. The tester waits for a statistically significant difference in behavior to emerge. The results from each variation are compared to determine which version showed the greatest improvement.
>p.s Kusama is the same as Polkadot
That is factually wrong; Kusama is a canary network for Polkadot.
https://www.optimizely.com/optimization-glossary/canary-testing/
The best two are Optimizely (https://www.optimizely.com/) and Visual Website Optimizer (https://vwo.com/). You can split your traffic and drive users to two totally different pages using just one URL.
Since you are running the experiment for a long time, it probably means that the experiment does not have enough traffic, and you should probably stop it. Long-term experiments can be less reliable.
Here are some links that you can use to calculate how long you need to run your experiments based on traffic and conversion rate.
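If you'd rather eyeball it yourself, here's a rough back-of-the-envelope sketch (all the traffic numbers are hypothetical; plug in your own):

```python
import math

# Hypothetical inputs -- replace with your own numbers.
daily_visitors = 1_200        # visitors entering the experiment per day
num_variants = 2              # control + one variation
required_per_variant = 5_000  # from a sample size calculator

total_needed = required_per_variant * num_variants
days = math.ceil(total_needed / daily_visitors)
print(f"Expect to run for about {days} days ({days / 7:.1f} weeks)")
```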
I think I'd approach it this way; though you'll have to tell me whether you can generate this from the data.
Start with the current baseline average per user. Say that's $10/user/week. Guess at how much you expect this to increase. Say 20%. That's $12/user/week.
What percentage of people currently earn you more than $12/user/week? This is your "conversion rate" for the purpose of this A/B test.
Now to the experiment.
You have two groups:

- Users who have increased rewards (variant)
- People who have the baseline rewards (control)
Now to the mathy part. You need to take the conversion rate, and then the change you expect to see. E.g. you expect to see a 20% increase in the number of users who convert by earning more than $12/week. Pop it into something like [this calculator](https://www.optimizely.com/sample-size-calculator/?conversion=7&effect=20&significance=95).
That's how many people you need to run through your experiment to get a good chance of the numbers having statistical confidence. Run your experiment, and then you can pop it into something like this in Excel.
It's not actually too complicated; just lots of spreadsheets.
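If you're curious what those calculators are doing under the hood, it's the standard two-proportion power calculation. Here's a minimal Python sketch of the classical fixed-horizon, two-sided version (Optimizely's stats engine makes different assumptions, so its numbers won't match exactly):

```python
from statistics import NormalDist

def sample_size(baseline, relative_mde, alpha=0.05, power=0.80):
    """Approximate sample size per variant for comparing two proportions."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

# 7% conversion rate, looking for a 20% relative lift (as in the link above)
print(sample_size(0.07, 0.20))  # ~5,700 users per variant
```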
If you're talking about a new feature or isolated change, it's A/B testing. If you're talking about a whole release, it's a staged (or staggered) rollout.
Optimizely does have a Sample Size Calculator.
More than a few people will find they ended their test ten years too soon.
For people not all that familiar with the issues and inevitable criticisms, a nice article is *Most of Your A/B Test Results Are Illusory and That's Okay*.
This goes into experiment methodology in an approachable way. Suffice it to say, the one gleaming truth you can hold out and convince anybody with is illusory. One convincing argument for A/B split-run testing is what people are doing (or propose to do) instead.
Were there such a truth to be seen, you would find people will shut their eyes to avoid seeing it.
It's not bad right now, so if you left it this way it'd still be fine, but I'd probably make it a bit more obvious and maybe the UI like other websites. For example:
That'd be my preference. For UX I'd use an example like Optimizely, though focus on the top part of the page since you don't need all of the explanation below since it's a simple app. Though, it'd be neat if you could load the summary below the search box since you have the extra real estate. Extra points for doing it with AJAX without refreshing the page.
Those are just some tips to make it more aesthetically pleasing. I think the simplicity is the best thing going for it, but if you can make it look good too you'll appeal to a larger audience.
Lastly, not sure if it's possible or not, but reducing the number of pages the algorithm can't summarize would be nice. For me it was about 20-25% that couldn't be summarized (though it was a small sample).
The thing I think you're looking for is enhanced e-commerce for Google Analytics. However, I think enhanced e-commerce requires a multi-page checkout. If it's possible to integrate GA and MemberMouse, MemberMouse support should be able to help you figure out how.
You could also try Optimizely but I don't know if that integrates with MM (tired of typing it haha) and again, you should be able to contact their support for assistance with that. Might be possible to test two different checkout pages.
~~Also, tip—when you post on Reddit, this sort of text should be included in the body. This helps visibility, as a lot of users use something called Reddit Enhancement Suite, and can expand the text and decide if they want to answer your question without clicking into the comments. Looks better than just the title, too!~~ Edit: Whoops, I see what you did now. TBH I think cross-posting is best in these situations still, but do you!
What are you hoping "better" means? Are you looking for a higher conversion rate, lower bounce rates, a more pleasing experience, a nicer design, etc.? (These things are not necessarily mutually exclusive, but it does matter to narrow down what you're looking for!)
Speaking completely subjectively, there were items I liked from each design! The new one is very nice to look at, but it feels like it scrolls forever. The older one is a little dated, but I am far more compelled to read the page, personally, as it is far shorter. If I were doing this with no data, I would say you need to find a way to partner these two items. Have, perhaps, the most important benefits listed first (3-5, not 12) and then an option for me to find out more where the rest can come up!
Now, that said, this is completely subjective, and really what you want is to figure out which one accomplishes your task better. You must run some form of experiment, probably best with an A/B testing scenario. Google Analytics offers a free tool to do this called "Content Experiments"; Optimizely (https://www.optimizely.com/) also has a free plan. The basic idea is that you show randomized versions of the page to random users and then monitor how they react. You then take this data to determine what does - and what does not - work for your situation!
As an aside, I'd highly suggest testing your assumptions with data. Data doesn't lie.
As an example, we thought people would sign-up with the button "Join the Revolution."
Although we are not quite at statistical significance, using Optimizely's free service, our data so far shows the text "Go" has a 48% improvement, "OK" an improvement of 96%, and "Sign-Up" 113%.
I haven't done one yet, but I have
http://www.powtoon.com/videomarketing/animated-explainer-video/
bookmarked for when I do.
High_Five's looks great too.
> They won't go with freelancer, as they feel they could be scammed.
Sounds like you aren't really doing the marketing... ask for a marketing budget and what they want (20% increase in sales in 3 months, etc) and see if they'll be more hands off/flexible that way?
A/B testing with the cheapest plan from https://www.optimizely.com/ may also be helpful for testing ideas vs. conversions.
I follow a bunch of blogs to make sure I keep up with new developments. These days, I primarily focus on PPC management and conversion rate optimization. For PPC I don't use any special tools. For A/B testing, I use Optimizely, and it's amazing. I can't speak more highly of it. My SEO tasks are mostly making sure our on-site SEO is good and the search engines are indexing our content properly.
Streak does: http://googleappengine.blogspot.com/2012/10/streak-brings-crm-to-inbox-with-google.html
Pretty certain Optimizely does too: https://www.optimizely.com/
Edit: I also have an iPhone app, backed by App Engine (Python), and it recently was #6 in the world in "Top Free Word Games". Here's a tweet, with a screenshot of the traffic, that App Engine handled effortlessly without a hiccup: https://twitter.com/clemesha/status/235467178843967488
Methinks you may need a bit of remedial work on Search Engine Optimization, as Pinterest's SEO is DEFINITELY the reason Pinterest muddies your search results.
>When was this an argument?
You started by arguing that they should have one. Should I just copy/paste the comment you made, or are you competent enough at using reddit to find it yourself?
Or ... if you're going to play coy and say you didn't use this exact language, then don't bother replying -- I am not interested.
>You do realize not everyone just buys one thing only right?
Yes. You do realize that not everyone buys multiple things, right? In fact, do you realize that the vast majority of online purchases are for single items?
>And why would it be bad for the customers having a better shopping experience?
Fewer clicks to make a purchase.
>but do you have something agasint shopping cart, or agasint anyone talking about it?
No, and no.
>Sorry, I don't follow. Is something wrong with having a shopping cart?
My apologies, I thought I was actually talking to someone who has thought about this for two seconds beyond the "No shopping cart" circlejerk.
I guess you don't know about shopping cart abandonment? I guess you don't understand that a shopping cart doesn't become more efficient than not having one unless the customer buys three or more items?
I think EGS should add a shopping cart. However, my initial comment in this thread was to someone that was trying to act like everyone needs a shopping cart because it is 2021. My point was -- and still is -- that a shopping cart is not a necessity for an online store, and that there are actual reasons for not having them.
Next time you want to argue about a topic, maybe you should learn a few basics first.
If you don't have a shopping cart, you actually save a click on any single purchase. With two purchases, you end up with the same number of clicks as a store with a shopping cart. Having a cart only becomes advantageous when you buy three or more things.
Also, shopping cart abandonment is a thing.
https://www.optimizely.com/optimization-glossary/statistical-significance/
This puts it in pretty simple terms. Let’s say you are testing ivermectin vs. placebo with an end target of “faster recovery from covid” or any other endpoint. If there is no statistically significant difference in the length of recovery time between ivermectin and placebo, then there’s no benefit in taking ivermectin over placebo. Just looking at the raw numbers doesn’t prove anything.
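If you want to see the mechanics, here's a hand-rolled two-proportion z-test with completely made-up numbers (an illustration only, not real trial data):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical: 55/100 recover quickly on the drug vs 48/100 on placebo.
# The raw numbers look like a win, but the p-value says random chance
# explains the gap just fine.
print(two_proportion_p_value(55, 100, 48, 100))  # ~0.32, not significant
```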
Shopping cart abandonment is one. You ever notice how Amazon strongly pushes a "buy it now" slider? You ever notice that Origin does not have a shopping cart, unless they have a big sale going on?
I am not arguing that they should not have one, but there are legitimate reasons not to.
Cool page
Some minor website-y stuff if it's helpful:
The chart cuts off on small screens. On your iframe, you could change width="1300" to width="100%" to fix it.
Allowing scrolling of both the page and an iframe inside the page makes the site hard to navigate. Not sure of the solution; it's hard for you to know the exact height of the iframe as things change. Maybe move all your content above it and just let the iframe go?
Too much text up top pushes the info the user came to see below the fold.
https://www.optimizely.com/anz/optimization-glossary/above-the-fold/
Are you referring to their sample size calculator? If so, I'm curious to learn what about it might be spreading mis-information?
For me, one thing I have been trying to battle with is people making the following statement
>You got a negative treatment effect because your sample size was small.
Where "small" is on the order of a few thousand. And then I point them to online resources, e.g. A/B test calculators. So for me, I've been liking the accessibility of some of these resources for helping others understand very easily.
I was ready to dismiss your idea because, frankly, your website sucks and these comments are a dumpster fire. After watching your first video, I see what's going on. You've built a product that is ostensibly targeted toward product managers, yet you posted in a subreddit that targets engineers. We aren't your target audience, hence the pushback. Consider posting someplace where this will be seen by product managers.
That said, what you're doing isn't necessarily unique. I recall using Optimizely for A/B tests nearly three years ago. I imagine there are cheaper alternatives. Make sure your offering stands out amongst this competition somehow; otherwise, why would anyone bother using your product?
Back to your website: you know it sucks. Fix it. You've done yourselves a disservice by releasing a site with so much text yet very little information. Also, besides not being informative, the video of the Stripe homepage somewhat implies that your product is used/endorsed by Stripe, which may get you in trouble with Stripe. Consider removing it.
Adjust your pitch to focus on the ease of use for Jill the PM to make changes without involving developers. If you need to use numbers, focus on the low amount of time it takes Jill to launch a new feature. The moment you start talking about a feature taking 16 hours to implement, you offend all the engineers that know we can do it faster. You still need us engineers to add that script tag.
Lastly, stop talking about feature flags. Feature flags are not your competition, and you aren't eliminating them. Your competition is me: the engineer that a PM is asking to implement a feature.
>I'm looking for criticism on my new site.
The web site is a delivery vehicle for information. The information is missing. Navigation is technically fine but unhelpful.
Is this going to be for freshwater fish? Looks that way. Is it asking too much to put Freshwater in the title, or do you have SEO hangups about that?
This site looks okay on the very most superficial level. Anything above this remedial level is void. That includes the unhelpful About Us. Frankly, it is as if two people who had never seen an aquarium and certainly never bought a fish built an aquarium stocking site.
Circumventing forty years of experience. That takes skill.
For instance, plecos are misunderstood. And nowhere is there any help for the user in figuring out a wide range of user questions. (No, that's not something to drop in a FAQ and be done with.) Unanswered questions drive users off site for answers. (It's a whole "sites are delivery vehicles for information" thing. Don't worry about it.)
Looks good. Unhelpful navigation. Zero information scent. Web credibility design fail. The site seems constructed, not so much with any mistakes about customers and users, but without any idea users or customers exist.
What is information scent? For one thing, asking about information scent is a quick way to empty out a room of web designers applying for a job.
u/2Throwscrewsatit
A/B testing is a method of comparing two versions of a webpage or app against each other to determine which one performs better.
Running an AB test that directly compares a variation against a current experience lets you ask focused questions about changes to your website or app, and then collect data about the impact of that change.
How A/B Testing Works
In an A/B test, you take a webpage or app screen and modify it to create a second version of the same page. This change can be as simple as a single headline or button, or a complete redesign of the page. Then, half of your traffic is shown the original version of the page (known as the control) and half is shown the modified version of the page (the variation). After collecting data, you analyze it and see which version performed better, with the metrics serving as the scorecard. If the new version performs better... great! You can then roll out the new version to everyone. If the new version does not perform better, you go back to the drawing board.
The OP's issue is that you want to be confident in the data and the results before making a final decision. If not, you could interpret the findings incorrectly and roll out a version that actually performs worse.
A full explanation can be found here: https://www.optimizely.com/optimization-glossary/ab-testing/
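For what it's worth, the 50/50 split is usually done deterministically rather than with a literal coin flip, so a returning user keeps seeing the same version. A minimal sketch of the usual hash-bucketing trick (real tools also persist assignments, log exposures, etc.):

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user: same user, same version, every visit."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # roughly uniform value in 0..99
    return "control" if bucket < 50 else "variation"

print(assign_variant("user-12345", "homepage-headline"))
```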
An approximation for Desktop is described by Optimizely blog:
>When determining an average fold placement, most web designers agree that the fold line is at approximately 1,000 pixels wide and 600 pixels tall.
In general, it's not fixed since there are too many device types. If you aggregate the most common screen dimensions and remove OS/browser navigation bars, you get a decent estimate.
Yeah, I just looked at the comment again and I don't see it either, but I have elsewhere on Reddit. It's probably Reddit and their split testing.
That's not really true. It's not strictly speaking A/B testing, but multivariate testing is a thing and people do test multiple permutations at the same time. One problem with testing one thing at a time, other than it taking ages, is that variables do actually influence each other. Maybe making the button big and red would improve conversions in isolation, but combined with other changes maybe it would reduce them. You'd never know if you just iterated on one thing at a time.
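The traffic cost of multivariate testing is easy to see if you just count the permutations. A quick sketch with hypothetical page elements:

```python
from itertools import product

# Hypothetical elements being tested together
headlines = ["Sign up free", "Join the revolution"]
buttons = ["red", "green", "blue"]
layouts = ["short", "long"]

variants = list(product(headlines, buttons, layouts))
print(len(variants))  # 2 * 3 * 2 = 12 permutations, each needing its own traffic
for headline, button, layout in variants[:3]:
    print(headline, button, layout)
```

Every element you add multiplies the number of cells, which is why multivariate tests need so much more traffic than a plain A/B test.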
You should learn what the "fold" is. All the important info is below it.
https://www.optimizely.com/optimization-glossary/below-the-fold/
> Why didn't you go with the original title of the article, OP?
A lot of websites do A/B testing for article titles, which means titles can change. This is slightly annoying for us mods because of our rule requiring post titles to mirror article titles. I did check the title when this was first posted and it did match.
https://www.optimizely.com/optimization-glossary/ab-testing/ -- tweak features one at a time and measure your conversion rate to see if it improves or not. In theory you end up with a much better performing shop after several tests.
I'm saying before you begin the experiment, you aren't sure what the increase in views is going to be from the new exposure point, so you don't know how conversion rate will play in to the math.
Basically, with this tool again, the min detectable effect is in terms of a change in conversion rate, but I'd like a min detectable effect in terms of X thousand conversions.
Effectively, I want to decide if the new exposure point is worth it in terms of raw conversions, conversion rate aside (you probably expect that two UI exposure points for a feature won't double conversions, but they may add meaningful marginal conversions at a lower combined conversion rate).
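The translation itself is just arithmetic; with hypothetical traffic numbers it would look like this:

```python
# All numbers hypothetical -- translating a rate-based MDE into raw conversions.
monthly_visitors = 200_000
baseline_rate = 0.04   # 4% convert today
relative_mde = 0.05    # smallest relative lift worth detecting: +5%

new_rate = baseline_rate * (1 + relative_mde)
extra_conversions = monthly_visitors * (new_rate - baseline_rate)
print(f"{extra_conversions:.0f} additional conversions/month")  # 400 here
```

So given expected traffic to the new exposure point, you can back out which relative MDE corresponds to the raw-conversion threshold you actually care about, and feed that into the calculator.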
Totally sounds correct to me! I use
https://www.optimizely.com/sample-size-calculator/
for any test, and I would guess about 6 out of 7 times, I do not have enough traffic, enough conversion rate, big enough difference in performance, and/or enough time for the test to actually prove a real difference.
> So clicking to a new page/navigation element is acceptance.
My understanding is that this is incorrect. However, it is implemented by some tools like OneTrust on https://www.optimizely.com, where clicking the close button on the banner (X) rather than 'Accept' is still taken as affirmative consent for all cookies (although I checked the breakdown before closing the banner and ensured all but necessary cookies were disabled). I think this is disingenuous and a reasonable person would assume (X) is equivalent to disagreeing.
You need to look at the sample size of the test too, not just the p value.
Here's a calc https://www.optimizely.com/sample-size-calculator/?conversion=5&effect=14&significance=95.
It says you're missing about 10-20k visitors per bucket.
Not meeting minimum sample size inflates the error rate.
If possible you should continue testing.
Don't fall in the trap of looking at just the p value. Your test didn't hit the minimum sample size for opens or clicks.
You can use something like https://www.optimizely.com/sample-size-calculator/?conversion=5&effect=14&significance=95. It calcs the minimum sample size (per bucket) for two-tailed tests only. You're still missing 10-20k visitors per bucket, depending on whether you're looking at clicks or opens.
Not hitting the sample size overinflates the error rate.
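If it helps convince anyone, here's a quick simulation of that inflation: both variants are identical (5% conversion), yet peeking at an undersized test every 500 visitors and stopping at the first p < 0.05 "wins" far more than 5% of the time (made-up parameters, stdlib only):

```python
import random
from math import sqrt
from statistics import NormalDist

def p_value(c_a, n_a, c_b, n_b):
    """Two-sided two-proportion z-test."""
    p = (c_a + c_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b)) or 1e-9
    z = (c_a / n_a - c_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(42)
false_positives = 0
runs = 1000
for _ in range(runs):
    ca = cb = 0
    for i in range(1, 5001):  # both variants truly convert at 5%
        ca += random.random() < 0.05
        cb += random.random() < 0.05
        if i % 500 == 0 and p_value(ca, i, cb, i) < 0.05:
            false_positives += 1  # declared "significant" by peeking early
            break
print(false_positives / runs)  # well above the nominal 0.05
```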
The feature is finalized, finished with implementation, and bug tested. It's just a matter of whether the feature itself is better for the business, which is where an A/B test helps.
Entire companies are built just to provide A/B test tools. Look at the customers that Optimizely has. There is no denying the benefits of A/B testing FINISHED product features.
TP,
Without knowing how much you spent on building that traffic, your traffic-to-sale conversion rate is actually not that bad.
Here're a few suggestions:
Build a "Newletter" pop up on the homepage to collect email addresses, so you can attempt to get them to revisit your website with a email blast.
Look into https://www.leadpages.net. When you send promotions, you can deploy custom websites to track how well a specific campaign is doing after you post an ad.
You can also try using https://www.optimizely.com/ to help with AB testing to help find what messaging works with certain audiences.
Let me know if you need assistance in configuring marketing campaigns.
Completely dependent on budget and requirements but have you looked at Optimizely X full stack?
https://www.optimizely.com/products/developers/
Can do a lot more beyond just web experimentation and it's aimed more at developers as you need to utilise SDKs.
Not sure if personalisation falls into that too or is an additional cost.
There are few other data points I would like to know before making any comment on how to approach this.
1) What is the sample size (number of visitors and number of conversions) in both tests?
This will give a better picture. For now, I suggest you use an A/B testing significance tool to determine the p-value of your test.
This is because I have seen many marketers just going by plain top-level numbers to decide whether a test is successful and can be ended.
Here are a few tools to measure significance:
1) VWO: https://vwo.com/ab-split-test-significance-calculator/
2) Optimizely - https://www.optimizely.com/resources/sample-size-calculator/
Hope it helps :)
PS: If you think you can share numbers, I can help you with it more.
I find these sites very helpful in getting you introduced to A/B testing. Sorry for the late reply; I overlooked your comment.
If you're looking into this subject, I would look into these 5 sites for more information about the different value they offer. I would advise you to use tools while A/B testing because they make the job so much easier :)
These are my personal favorites:
Truth is, there is no guide, just strategies that work sometimes. Consider first talking with people who will PAY. Don't sell them the idea or talk about it, but ask what their world looks like without the existence of your potential business. Example: selling tires. Don't go like "I'll be selling tires, you have a car, would you pay for tires?" but rather "How often do you buy tires? How do you proceed? Where do you store them?" Keep it informal and uninterested. Get specific answers, don't hunt for compliments, and I think this could get you going.
Now for the website: set up a lot of websites (like 6) and do some multivariate testing. It is, for the moment, the only "scientific" method I've used to measure an idea's potential without leaving my couch. Maybe check this out: https://www.optimizely.com/resources/multivariate-test-vs-ab-test/. You can send me a private message anytime.
Optimizely is pretty good if you want low cost/free for your level of usage: https://www.optimizely.com/plans/
Beyond that, Visual Website Optimizer (starts @$49/mo) is probably the best tool out there without going to the 'enterprisey' level.
Optimizely experiments - Free plan goes a long way
Material Design for Bootstrap - Pleasant evolution of responsive elements
While we all have ideas on what is effective, your best bet is to be testing.
Google Analytics offers a completely free A/B testing tool called 'Experiments'. You can have traffic split into two and see which page of yours is most effective.
Here is some more info on it: https://www.optimizely.com/ab-testing/
The site is SEO'd and metric'd out the wazoo, which befits someone in marketing. I mean, it adds your UA string to the <html> tag.
The whole site looks like it was done with building blocks provided by https://www.optimizely.com/
So you A/B test every design? Post some of your data. Write us a short/small case study showing how design impacts conversion and overall business goals. I'd love to see it.
Also be sure to plug your numbers into something like Optimizely's sample size calculator and show your sample size is enough to make a judgement call.
Send me a PM when you do and I'll give you reddit gold for it. I have never met anyone who actually included a/b test as part of the design process for every design. The data you have must be amazing!
People hate paywalls. But your model is very similar to a subscription model. People seem to love that model.
I'd just change your marketing point of view, and make any technical changes necessary to move to a subscription model. You could even go tiered if you wanted (e.g. lowest tier has only access to a stuff a day old/archives, the next tier has access to the daily content and archives).
This model is perfect because of your frequently updated content.
If I remember the stats correctly, your conversion rate is actually pretty high for a premium content site. If you want to improve it further, though, re-evaluate your funnels.
What does a new user see when they visit the homepage? What calls to action to sign up/subscribe are there? How is the account-creation user experience? How is the subscription information form user experience?
There are examples like this everywhere. One change to a critical part of your site can make a huge difference. I recommend looking into A/B and Multivariate Testing. Optimizely is the most popular automated online A/B Testing tool.
Short answer is no. Basically below 30 means it's not possible to be accurate, not that 30 is enough. Use a sample size calculator like https://www.optimizely.com/resources/sample-size-calculator/ - don't overestimate your effect, 10% is a lot - say you have a 30% open rate and you're looking for a 10% improvement (to 33%), you'd need 2,667 per variant to detect a 10% uplift. Below that and if you see a 10% uplift it's attributable to random chance.
Website conversion rates take TONS of visitors to detect an effect because conversion rates are lower, check it with a 5% conversion rate looking for a 5% uplift.
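For a sanity check, the classical fixed-horizon formula reproduces that shape (exact figures vary between calculators depending on one- vs two-tailed and power assumptions, so don't expect it to match Optimizely's number to the digit):

```python
from statistics import NormalDist

def n_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate two-sided, fixed-horizon sample size per variant."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return int((za + zb) ** 2 * var / (p2 - p1) ** 2) + 1

# 30% open rate, 10% relative uplift (to 33%): a few thousand per variant
print(n_per_variant(0.30, 0.33))    # ~3,800
# 5% conversion rate, 5% relative uplift (to 5.25%): a whole lot more
print(n_per_variant(0.05, 0.0525))  # ~122,000
```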
Thanks for the explanation. How is your service integrated with my cart? Does it matter if my cart is built with PHP, ASP, Java, Ruby etc? Is your service integrated primarily with JavaScript?
Some of what you've described sounds a lot like split testing: "structure and test content on their site to segments of traffic". Is your service any different than these:
I'm not sure how the "sticky cart" ties into the segmented content from the paragraph before it. Are these two different features provided by your service?
Most importantly, do you have a website I can check out for details regarding your service? And do you have a demo I can take a look at?
>I decided to create Split Optimizer to fill a gap (I think) in A/B testing services with low, affordable costs for small businesses and bootstrappers (like me).
Optimizely might differ on this.
Where -- oh where -- is the split run test on FREE versus legitimate, genuine value?
I'd drop the slider at the head of the page and just use the phrase 'Remote Viewing'. You need to tell visitors what you are selling the moment they hit the page. They are either interested or not; if not, then you want them off your site, but if they are, then you need to be super clear about what you are offering. In fact, I'd go one step further: I'd sign up for https://www.optimizely.com/ and use A/B testing to test the 'Remote Viewing' phrase to death. When I hit on the phrase with the best engagement, I'd start building in the rest of the page elements.
Not a stupid question. Here's a pretty good explanation of what it is: https://www.optimizely.com/ab-testing
A/B testing allows you to run an experiment to answer questions such as: "Are site visitors more likely to click on a red button, or a green button?" The A/B testing software would show the red button (the control) to 50% of the visitors, and the green button (the variation) to 50% of the visitors. It tracks which version is clicked on most, and after testing against a statistically significant number of visitors, a winner is declared. In this case, the green button wins, so you conclude the test, and hard-code your website to always use the green button because it results in more clicks.
This is obviously very simplified, but hopefully it gets across the gist of it.
The book teaches you how you can run these sorts of tests, and how you would identify what to test in the first place (randomly testing button colors is not productive). It also includes some examples, and tips from well-known industry experts.
Hope that helps!
Thanks for replying - sorry I don't have a very sophisticated understanding of statistics. I've been looking into how long we should run our tests and I've mostly been basing it on when we get a sample size that is large enough to detect the change we want to detect. I'm using this calculator: https://www.optimizely.com/resources/sample-size-calculator
If our baseline conversion rate is 2%, and we want to detect a .1% change in conversion (meaning the minimum detectable effect is set to 5%), the sample size needed for each variation is 244,355 with 80% statistical power and 95% statistical significance.
There is an option for selecting a one-tailed or two-tailed test, and the two-tailed test increases the required sample size... but if Optimizely sets up one-tailed tests, calculating a sample size for a two-tailed test doesn't make sense, does it?
I was under the impression that a one-tailed test will only detect change in one direction. So if we want to implement a feature, but first want to test it to make sure it's not hurting conversion, a one-tailed test wouldn't give us this information, correct?
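For my own sanity check, here's a back-of-the-envelope version of how the tails change the required n (a classical approximation, so it will only roughly agree with Optimizely's calculator):

```python
from statistics import NormalDist

def n_per_variant(p1, p2, alpha=0.05, power=0.80, two_tailed=True):
    """Approximate fixed-horizon sample size per variant."""
    q = 1 - alpha / 2 if two_tailed else 1 - alpha
    za = NormalDist().inv_cdf(q)
    zb = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return int((za + zb) ** 2 * var / (p2 - p1) ** 2) + 1

# 2% baseline, 5% relative MDE (i.e. detecting a move to 2.1%)
print(n_per_variant(0.02, 0.021, two_tailed=False))  # ~248k, near the 244,355 above
print(n_per_variant(0.02, 0.021, two_tailed=True))   # ~315k
```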
Have you tried https://www.optimizely.com/?
I think this will help reduce your testing.
Also--100?
Maybe start with 2-3 that have one significant change between them and A/B test them. Rinse and repeat until you have a good landing page.
I try to completely utilize analytics to improve certain aspects of my design, such as the CTR on an important button or something like that. With that being said, when I am doing design level optimizations I almost exclusively use Optimizely. All of my information gathered from Google Analytics shape my SEO practices on the development and content side of a site.