Growth Marketing with CXL - Week 8: My Approach to Research and Testing In The Past

Boolsis

Photo by Dan Dimmock on Unsplash

Research and testing are a vital part of any conversion cycle, as discussed by Peep Laja, founder of CXL.

Framing This Week

In this week’s article, I wanted to take a moment to relate the material to my professional experience. My company’s website, and my products to some extent, are still pretty new, so I haven’t reached a point where I would like to start testing. Of course, I’m hoping that will change in the next couple of months! So, if you’d like to follow along, please check out 8ight5ive Games’ website and sign up for our mailing list!

Testing in Video Games

Anyway, this week Peep went over a basic discussion of research and testing methodologies. Since I cannot yet apply that to my personal business, I wanted to discuss it in terms of my professional career in the video games industry.

I’ve spent the past 12+ years working in the video games industry, with a large portion of that time in mobile. If you are familiar with the mobile games industry, you will know that things move fast, so testing is just a part of life in the field. One day you are testing various sales; the next day, you are testing various features.

Speed is pretty important, but so is relevancy. As Peep points out at the start, optimizing the optimization process is work in and of itself. Not only do you have to know what and how to test, you have to get better at finding what to test in the first place. Testing irrelevant or low-impact changes is a waste of your time. I know from personal experience.

In the past, we have tested a lot of aesthetic elements such as fonts, button colors, and asset colors. Sometimes they move the needle; sometimes they are a waste of time and resources. Knowing which tests to prioritize, and why, is super important.

More Speed Equals More Tests

I mentioned earlier that the mobile video games industry moves fast. That means tests should move just as fast. However, Peep argues against moving too fast: you want to generate great ideas and hypotheses to test, but you don’t want to rush through the tests themselves. He actually argues for a minimum of two weeks per test to reduce the chances of an off week producing incorrect results.

Similarly, in the mobile industry there is a sweet spot for how fast you can move with tests, so that you obtain the right amount of data without pulling in too many factors that make your test(s) irrelevant or messy. You want to pull in enough volume to produce a solid sample size, but you also don’t want to stretch your tests so long that random data starts muddying up the results.
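To make that tradeoff concrete, here is a rough sketch of my own (not something from the course) that estimates the per-group sample size needed to detect a lift between two conversion rates, using the standard two-proportion formula at 95% confidence and 80% power. The rates below are hypothetical:

```python
import math

def min_sample_size(p_base, p_target, alpha_z=1.96, power_z=0.8416):
    """Rough per-group sample size needed to detect a change from p_base
    to p_target in a two-proportion test (defaults: 95% confidence, 80% power)."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = p_target - p_base
    return math.ceil((alpha_z + power_z) ** 2 * variance / effect ** 2)

# e.g. spotting a lift from a 2.0% to a 2.5% purchase rate
n = min_sample_size(0.02, 0.025)
```

Notice how small expected lifts blow up the required sample size, which is exactly why stretching an underpowered test for months rarely pays off.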

With regards to timing, I cannot say that I have stuck religiously to Peep’s two-week minimum. Sometimes developer resources are tight, so we have to move fast and work off whatever data we can pull. In the past, we have had tests that ran for a week; afterwards, we had to make calls on whether or not those tests were successful based on various KPIs.

I will admit, sometimes we did push tests further if we felt the results were just not significant enough or the sample size was too small. Each test was handled on a case-by-case basis, since we wanted to make sure we were producing results for management while not wasting precious developer resources.

While Peep discusses his optimization strategies for websites, a lot of his points are totally relevant to product testing. As he mentioned, you cannot implement too many tactics at once. Otherwise, you end up with a crazy web page that has too many distractions. Similarly, if you’re testing anything within a video game product, especially a live one, you must ensure your tests are not impacting one another and thus producing suboptimal results.

Normally, you would not test multiple sales on the same audience. These would be broken up into separate segments or test groups to ensure you get the results you need. Even then, you should ensure that concurrent tests are similar enough to run side by side. You would not run a 50% sale for one group while the second group gets a test of whether a blue button encourages more click-throughs than a green one.
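One simple way to keep concurrent tests from stepping on each other is deterministic bucketing: hash each player into a group per experiment, so assignments stay stable for a given player and independent across experiments. A minimal sketch (the function and IDs are hypothetical, not from any specific game):

```python
import hashlib

def assign_group(player_id: str, experiment: str, n_groups: int = 2) -> int:
    """Deterministically bucket a player for a given experiment.
    The same player always lands in the same group for that experiment,
    while different experiment names produce independent splits."""
    digest = hashlib.sha256(f"{experiment}:{player_id}".encode()).hexdigest()
    return int(digest, 16) % n_groups

group = assign_group("player_123", "fifty_percent_sale")
```

Because the split is a pure function of the player and experiment name, you never have to store assignments, and a player cannot flip between variants mid-test.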

Properly Prioritizing Priority

Working in product, you are normally the owner of a roadmap and thus responsible for determining, whether together with other stakeholders or individually, the work to be done in the months to come. Your decisions impact several cycles of work. So, if you or a team member starts asking what to test next, then you have a problem with your optimization process.

Luckily, I have yet to have that problem, though not because I am so good at figuring out what to test. Far from it. It is normally due to having too many things to test! Between your own hypotheses, ideas from upper management, theories from your colleagues, and random issues or questions that arise during development and require answers, you normally end up with a never-ending list of problems to solve. As such, it has been crucial to stay on top of prioritization.
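When the backlog of test ideas never ends, a simple scoring model helps keep the ordering honest. Here is a sketch using an ICE-style score (impact × confidence × ease, each rated 1–10); the backlog entries are made up for illustration:

```python
# Hypothetical backlog: (idea, impact, confidence, ease), each rated 1-10
backlog = [
    ("button color swap", 3, 6, 9),
    ("rework beginner funnel", 9, 5, 3),
    ("50% sale variant", 7, 7, 8),
]

def ice(item):
    """ICE score: impact x confidence x ease. Higher means test it sooner."""
    _, impact, confidence, ease = item
    return impact * confidence * ease

ranked = sorted(backlog, key=ice, reverse=True)
```

A score like this will not replace judgment, but it forces you to write down why one test jumps the queue ahead of another.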

Peep discusses his idea of a good optimization process, which I can say I have implemented to some degree as well. He judges a process by whether or not it (1) tells you where the problems are, (2) tells you what the problems are, (3) explains why they are problems to begin with, (4) turns those problems into hypotheses, and (5) properly prioritizes the resulting tests.

In my experience, the first two criteria in his list have never been an issue. Problems exist whether you want them to or not. Knowing why they are problems can become tricky if you do not pay attention. Then, figuring out how to frame a problem into a hypothesis and properly test it is another feat in and of itself. Sometimes this last part can be a lot of fun.

Instrumentation is Key

I have never really run analysis on a website, so that will be new for me when the time comes for my company’s site. However, instrumentation in video games is no stranger to me.

I have always been of the mind to instrument everything from the get-go. Every click, every link, every page, every action that a player can take should be tracked and logged. I am actually really relieved that Peep agrees with this tactic. The only way to know which user behaviors increase the likelihood of conversion is to track what users are doing to begin with. Once you know what they are doing and which actions typically lead to conversion, you can optimize your conversion funnel.
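“Instrument everything” can start as simply as funneling every player action through one logging helper that emits a timestamped JSON record. A minimal sketch (event names and fields here are hypothetical, and a real setup would ship records to an analytics pipeline rather than print them):

```python
import json
import time

def track(event: str, player_id: str, **props):
    """Log one player action as a JSON line with a timestamp.
    In production this record would be shipped to an analytics
    pipeline instead of being printed."""
    record = {"event": event, "player_id": player_id, "ts": time.time(), **props}
    print(json.dumps(record))
    return record

r = track("button_click", "player_42", screen="store", button="buy_gems")
```

The payoff of one shared helper is consistency: every event carries the same core fields, which makes the downstream funnel analysis far less painful.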

Of course, the ability to track a large set of data is worth nothing if you cannot interpret it properly, so that too is crucial to your conversion research. In fact, this is normally the make-or-break discussion when asking BI or the developers to start tracking everything. And I get it. It is a lot of work for both teams, and it can end up being what saves the product or a pile of work that becomes irrelevant.

Progressing Through Your Tests

So, now that we have established a list of hypotheses to test, can instrument every action a player or user takes, and can prioritize the seemingly endless list of tests...what now? How do you determine when you have successfully completed a test and can move on to the next?

Peep’s lesson reviews the “end” of a test as it relates to websites. Again, that is not something I am familiar with, so I will relate it to my experience in the product world. Normally, we declare a test “complete” once we have reached statistical significance with a large enough sample size. With mobile games, if you typically receive enough installs or daily active users to justify your sample size, you will be in pretty good shape. If not, what we have done in the past is coordinate our tests with campaigns run by the UA team (if the test will impact the beginner’s funnel).
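For the “have we reached significance?” call on two conversion rates, the usual tool is a two-proportion z-test. Here is a sketch with made-up numbers (200 of 10,000 players converting in the control versus 260 of 10,000 in the variant):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing conversion rates of control (a) and variant (b),
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(200, 10_000, 260, 10_000)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
```

In practice I would lean on a stats library rather than hand-roll this, but the calculation itself is short enough to keep the “is it done yet?” conversation grounded in numbers.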

Of course, we have had tests that we eventually had to end as insignificant no matter the sample size or statistical results. It happens. Sometimes you simply end up with hypotheses that do not work out no matter how much testing you throw at them. In those cases, it is much more beneficial to re-evaluate and restructure your test, or move on to another one if you are short on time. In either case, I have always opted for progress to ensure you are hitting your business KPIs. There is no point in spending weeks on a test that is just using up precious time and resources. The discussions you end up having later about those tests are normally not productive ones.

In Closing

Having spent a little more than eight weeks now with CXL, I have to admit that I spend a portion of the time learning things I feel may not be relevant to my specific use cases. Still, a lot of the information is really beneficial if I apply it to my professional career.

As I mentioned earlier, my personal business, 8ight5ive Games LLC, is still in its very early stages, so I have yet to put a lot of what I have learned in this Growth Marketing minidegree to use. However, that does not take away from the fact that much of the information in this curriculum is entirely useful. If you have the means or the time to pursue a minidegree from CXL, I would highly recommend it!
