Experimenting with digital marketing at your startup
Jeremy Kagan, Professor of Digital Marketing Strategy at Columbia Business School, on running the first marketing experiments at your startup
I’m very excited to have Jeremy Kagan with us today to talk about digital marketing strategy. Jeremy has been a Professor at Columbia Business School for over a decade teaching coursework in Digital Marketing, Strategy and Innovation, and Entrepreneurship. In addition to teaching, Jeremy acts as a Venture Scout for Mighty Capital and an Advisor to Pointforward Accelerator. Prior to his career in academia, Jeremy was the founder of PricingEngine.com, a SaaS marketing technology company helping small businesses manage search marketing.
What you’ll learn:
The three different marketing strategies
How to architect your first marketing experiments
Tools and resources to use along the way
CB: For a founder who’s starting from scratch on the marketing front with their startup, how do you like to classify the various strategy options?
JK: When I’m teaching digital marketing, I always like to approach it from the perspective of someone who’s doing this for themselves, probably for the first time, and in the context of a young startup. There are three ways you can do digital marketing: outbound, which in the early days is primarily advertising; inbound, which HubSpot made very popular with their content strategies to move people down the funnel; and engagement marketing, which is designed to keep customers aware of your brand.
I like to start companies off with a basic ad campaign. When you’re a young company in an accelerator program, you probably get some decent traffic to your website, but that doesn’t mean those visitors are potential customers for your business. The challenge a lot of founders have is that they end up with a long list of emails they still need to match to real customers, and they don’t really know what those customers look like yet. A good way to hack this together in a quick and dirty fashion is with search. Search is people looking for stuff. In my opinion, it’s one of the best channels ever invented, because people are looking for solutions and Google is trying to find them as quickly as possible. So step back and think about the broad-strokes problem you’re solving, and then put an ad up against popular searches that relate to your solution. Let’s say you’ve designed a solar-powered campfire stove. People might not know that technology exists, but they might be searching for things like eco-friendly camping, charcoal grills, etc. The great thing about search is that you’re targeting based on interest rather than on demographics, as in traditional marketing, which can keep you from inserting unintentional biases into your campaigns.
CB: I love the idea that you can try to find people who are already searching for you. How do you recommend founders think about budgeting for their first search-based campaign?
JK: I recommend working backward using a data-driven approach. For your first experiment, you’re trying to reach some confidence level that indicates you know how to target buyers. More conversions will always give you a higher confidence level, but let’s say that to start you want to see 100 conversions off of that experiment, and that will indicate initial success. The more things you want to be sure of, the bigger your data set is going to have to be. If you’re going to pay by the click, and similar stores are getting 5% conversion rates, then you need roughly 20 clicks for every conversion. So in this case, 20 * 100 means 2,000 clicks, and at a dollar per click you’ll need a budget of at least $2,000.
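As a rough sketch of that back-of-the-envelope budgeting math (the $1 cost-per-click is an assumed placeholder implied by the $2,000 figure, not a number Jeremy specifies):

```python
# Work backward from a target number of conversions to a minimum search-ad budget.
# The figures mirror the example above; the cost-per-click is an assumed placeholder.
def search_budget(target_conversions: int, conversion_rate: float, cost_per_click: float) -> float:
    clicks_needed = target_conversions / conversion_rate  # 100 / 0.05 = 2,000 clicks
    return clicks_needed * cost_per_click

budget = search_budget(target_conversions=100, conversion_rate=0.05, cost_per_click=1.00)
print(f"Minimum experiment budget: ${budget:,.0f}")  # -> Minimum experiment budget: $2,000
```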
With search, you’re going after the low-hanging fruit, and you pay a fair market price for those leads because of the reverse Dutch auction pricing mechanism Google uses - it’s market-driven. Then you can build off of your conversion rate to understand how much you can afford to bid, which you can extrapolate into your customer acquisition cost for that specific channel. That serves as a firm data point you can use to compare the cost and value of all other customer acquisition channels. Also, when you identify the searches that lead to purchases of your product, that will inform your addressable market through the channel. For example, there’s a fixed number of people searching for “eco-friendly camping” each week, but that number tells you how many customers, and how much revenue, you can drive through that channel in the near term.
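The same arithmetic gives a per-channel customer acquisition cost and a rough ceiling on near-term volume. A minimal sketch, assuming a hypothetical weekly search volume and click share (neither figure comes from the interview):

```python
# Per-channel customer acquisition cost (CAC) and a rough near-term ceiling
# for a single search term. All inputs are hypothetical placeholders.
cost_per_click = 1.00      # what you actually pay per click in the auction
conversion_rate = 0.05     # share of clicks that turn into customers
weekly_searches = 10_000   # hypothetical weekly volume for "eco-friendly camping"
click_share = 0.02         # hypothetical share of those searches you win as clicks

cac = cost_per_click / conversion_rate                              # $1 / 0.05 = $20 per customer
weekly_customers = weekly_searches * click_share * conversion_rate  # 10,000 * 0.02 * 0.05 = 10
print(f"CAC for this channel: ${cac:.0f}")
print(f"Rough near-term ceiling: {weekly_customers:.0f} customers/week")
```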
CB: Let’s say your product is further out on the innovation curve, or you’re creating an entirely new category of product. Does that change how you run the experiment as a marketer? Are there methodologies specifically for those types of products?
JK: Although the product itself might be very innovative, there are almost certainly things people are searching for that relate to the value you provide. There are always analogies you can make that relate back to the consumer’s current behaviors and habits. For example, Uber was game-changing as a method of delivering efficient and cost-effective transportation, but although it was a new service, people were already familiar with ordering a black car or hailing a taxi.
There are two primary methodologies I’ve had success with: 1) targeting the problem you solve, and 2) targeting people who already exist in the market. For the former, you have to get in the mind of consumers and understand what they’re currently doing to solve the problem you focus on. For the latter, you have to find tangential solutions that relate to the problem you’re trying to solve. You can then try to grab the interest of people who are already using those products successfully and convince them to try yours. Your analogy won’t be perfect from the beginning, but when you do start to acquire customers, they’ll help inform where the new market is. In unique cases where a startup has a lot of capital, you can spend that cash to create the new market via extensive market education, but again, that’s a rare situation. In the beginning, it’s all about connecting with people and getting their feedback to help articulate your value proposition.
CB: Obviously results will vary, but are there best practices you recommend to set an experiment up for a successful outcome? What do you deem to be a successful experiment?
JK: The nature of experimentation is that some experiments will succeed and some will fail, but going in you have to set a hypothesis that the particular experiment will test. One mistake I see frequently is cutting experiments short. If you don’t run an experiment long enough you won’t get clear data, which leads to a series of other problems - for example, you won’t know whether you should run the same experiment again with slightly different parameters. That’s a really bad outcome for an experiment: if it’s not successful, you will have spent money and learned nothing. There are some basic rules I like to follow: 1) focus on something small that can be improved, like changing the focus of your messaging, or a button’s color or placement - err towards simplicity in the early days; 2) write down a clear hypothesis for the experiment; 3) have very clear metrics for success, like an x% boost in conversion; 4) think in bang-for-your-buck terms, i.e., is the potential increase in performance worth the cost of the experiment and of making that change; 5) use a champion vs. challenger A/B test methodology, where you compare the performance of your existing setup against the new one and the winner lives on. Think of it as a high school science project: hypothesis, expectation, and an experiment that either proves or disproves it. Remember, a failed experiment is also a good outcome - you know from the data that the approach you tested does not work, so you can focus elsewhere.
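One way to make the champion vs. challenger comparison concrete is a simple two-proportion significance check; the test below and its visitor and conversion counts are illustrative assumptions, not a methodology Jeremy prescribes:

```python
# Champion vs. challenger: did the challenger's conversion rate beat the champion's
# by more than chance would explain? Counts below are made-up placeholders.
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (absolute lift, z statistic, one-sided p-value) for B beating A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))              # one-sided: challenger > champion
    return p_b - p_a, z, p_value

# Champion (current page) vs. challenger (new button placement), hypothetical numbers.
lift, z, p = two_proportion_z_test(conv_a=100, n_a=2000, conv_b=130, n_b=2000)
print(f"lift: {lift:.1%}, z: {z:.2f}, one-sided p-value: {p:.3f}")
# Promote the challenger only if p clears your pre-set threshold (e.g. 0.05)
# AND the lift is worth the cost of the change - the "bang for your buck" test.
```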
CB: In our examples so far we’ve been speaking in the context of consumer products. How do your approach and metrics change if you’re in the enterprise software world?
JK: The enterprise comp to the above example would be something like a lead generation form, where it’s still fairly low friction and can be opted into impulsively. It’s important that you have some kind of value to offer the customer in exchange for filling out that form, like a case study or a buyer’s guide for the specific type of software you sell. If I’m in the market for that kind of software, there’s a decent chance I’m going to fill out a lead gen form to get the buyer’s guide - this can lead to pretty high conversion rates. Though I should mention that in this case the goal is not to convert on the purchase of a product; rather, you’re trying to qualify leads that will usually be passed off to the sales team. You start by casting a wide net and narrow down the definition of a qualified lead over time as you gather more information from sales outcomes. Eventually, you can identify non-buyer leads to filter out and not pass over to sales, so you might be sending them a lower quantity of leads, but ones that are more likely to convert into customers.
CB: Are there tools you recommend founders use early on to help get started? There are so many marketing tech tools out there - where do you see great value early on for low cost?
JK: There are a number of free tools for spinning up your first search campaigns. Google Ad Planner is great for figuring out your market and budget early on. Google Trends is helpful for seeing how many searches are being conducted for products and services that could compete with what you’re offering. Pinterest and Taboola also have trend tools you can use. Once you have enough data coming through your website, you can use those customer profiles and emails to create “lookalike audiences” through Facebook. The real power comes from being able to target people better and better, and from making sure your message and how it’s presented are most likely to get their attention in the first place.
Best way to reach Jeremy: