Facilitating growth experiments
I was part of introducing growth experiments at Attest: essentially a structured way to test and measure the impact of any growth-related improvements we made to the marketing website.
They could be as complex as creating a new section (e.g. a hub for educational content) or as small as A/B testing a call-to-action. In essence, they all follow a similar format, close to a lean UX approach.
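To illustrate the mechanics behind a call-to-action A/B test, here is a minimal sketch of deterministic visitor bucketing: hashing a stable visitor ID so the same person always sees the same variant. This is a hypothetical example, not Attest's actual setup; all names and copy are illustrative.

```typescript
// Simple string hash (djb2) so a visitor always lands in the same bucket.
function hashString(s: string): number {
  let h = 5381;
  for (let i = 0; i < s.length; i++) {
    h = ((h << 5) + h + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

type Variant = "control" | "treatment";

// Assign a visitor to a variant based on an anonymous visitor ID,
// salted with the experiment name so experiments don't correlate.
function assignVariant(visitorId: string, experimentName: string): Variant {
  const bucket = hashString(`${experimentName}:${visitorId}`) % 100;
  return bucket < 50 ? "control" : "treatment"; // 50/50 split
}

// The same visitor always sees the same CTA copy on repeat visits.
const cta =
  assignVariant("visitor-123", "homepage-cta") === "control"
    ? "Get started"
    : "See Attest in action";
```

Keeping assignment deterministic (rather than random per page load) matters for measurement: a visitor who flips between variants would muddy the metrics for both.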
For example, our homepage bounce rate was north of 55%. A high bounce rate can indicate that potential prospects do not find what they are after. So we set out to understand why users drop off, and whether we should (or could) make improvements.
Research & user testing
Analytics helped us spot anomalies in user flows, and heat maps told us, to a certain extent, where users were digesting information and where they were likely to bounce.
With this in mind, I conducted user tests to understand how users interact with certain website pages or elements. The main objective of our user tests was to confirm or distill a hypothesis that we could confidently work with.

On the homepage we wanted to find out whether users understand what Attest does at a glance, what their expectations are, what triggers them to find out more, and what holds them back if they don't.
Here are some general findings*:
- Users are sceptical of stand-alone company logos. They want social proof from companies they can relate to; numbers spark the most interest.
- Users want clear USPs. They want to instantly understand what makes you different from your competitors.
- Users want to see the product in action. Showing the user interface seems to give an instant impression (and ideally confirmation) of the quality and ease of use of the product.
*Note that I only mention general findings here for the sake of brevity. Some of our testing honed in on content (e.g. the wording of our value proposition and/or USPs) and on the overall layout (e.g. clarity, use of colour, illustrations, etc.).
Benchmarking & hypothesis
An important part of our growth experiments is that we always set a benchmark metric (e.g. bounce rate) so that we can measure our impact afterwards.
From our data and/or user tests we distilled opportunities, from which we formulated hypotheses we could work with. Some hypotheses were measurable; others were more qualitative.
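For the measurable hypotheses, a benchmark only tells you something if the change you observe is unlikely to be noise. As a sketch of how one might sanity-check a bounce-rate drop, here is a two-proportion z-test. The traffic numbers are illustrative, not Attest's real data.

```typescript
// Hypothetical sketch: is a drop in bounce rate statistically meaningful?
// Two-proportion z-test comparing before/after bounce counts.
function twoProportionZ(
  bouncesA: number, sessionsA: number,
  bouncesB: number, sessionsB: number
): number {
  const pA = bouncesA / sessionsA;
  const pB = bouncesB / sessionsB;
  // Pooled proportion under the null hypothesis (no real change).
  const pooled = (bouncesA + bouncesB) / (sessionsA + sessionsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / sessionsA + 1 / sessionsB));
  return (pA - pB) / se;
}

// Illustrative numbers: 55% bounce on 10,000 sessions before,
// 40% bounce on 10,000 sessions after.
const z = twoProportionZ(5500, 10000, 4000, 10000);
// |z| > 1.96 corresponds to significance at the 95% level.
const significant = Math.abs(z) > 1.96;
```

With sample sizes like these, a 15-point drop is far outside the noise; with only a few hundred sessions, the same drop could easily be chance, which is why the benchmark and the traffic volume matter together.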

Ideation & prototyping
Depending on the nature of the experiment, this stage could involve facilitating a workshop to generate ideas, mapping out user flows, questioning our information architecture, and so on, before prototyping.
The level of prototyping depended heavily on the use case. Low-fi static designs helped me work out ideas quickly and find the right information hierarchy or flow. High-fi (interactive) designs focused more on user interaction and visual feedback. Sometimes we even made changes directly in our build without any specific design work or prototyping.

Coding
I often prototype certain elements or pages in code. I find that it adds a layer of interaction or animation that would otherwise be missed in static prototypes. It allows me to get accurate user feedback (through design crits or user tests), and it helps to communicate intent to developers.
Visit my Codepen for some examples.
Testing & iterations
Finally, we monitored our changes over time to see if we were able to move the needle. If not, we tried again. Sometimes OKRs changed, which gave us an incentive to tweak initial experiments or start new ones.
For the homepage, I ran another user test (with a different pool of participants), which confirmed that the improvements we made had an immediate effect. Within a few months we were able to reduce the bounce rate to 40%.