22 Jan 2014

How Do You Measure the United Marketing-Product Team’s Results?

(This is the fourth in a series of posts on the lean approach to business, which focuses on understanding prospects and customers through extensive research, and thereby achieving higher customer satisfaction. The approach is timely because people increasingly start shopping by asking for referrals rather than by searching, as described in Facebook is replacing Google. It therefore makes sense for product and marketing to merge into one team, so that the company has the best chance of selling things of value to customers, at a profit. The idea is that market research is the key input, the food for thought, for both the marketing and product teams, and that selling and product development can’t be dissociated. If you haven’t read any of the previous posts, you’ll likely benefit from checking them out first.)


Benchmarks are how you gauge whether the systems are running well or poorly. This is how you hold a united M&P team accountable.

Measuring your performance is vital not just on the road, but in business too. This article shares some useful metrics and elaborates on the concept of benchmarks, for a united marketing-product team. Photo: DC Visual artist Joseph Nicolia


Benchmarks are the standard minimum targets for your key performance measures.

When used properly, they can tell you whether you’re selling a desirable product at a profit. They need to be objective and specific. You also want them to be attainable, relevant and time-bound (the SMART criteria), so that success or failure is clearly defined.

Some examples of good metrics and their associated benchmarks follow (format: metric – benchmark).


Sales metrics and benchmarks
  • Profit/month – Make $2000 profit this month.
  • Profit/sale – Make $100 profit/sale.
  • Sales/week – Close 5 sales/week. (Over ~4 weeks/month, that’s 20 sales, times $100 profit/sale = $2000 profit/month.)
  • Leads/week – Generate 50 leads/week.
  • Split tests/week – Run 3 split tests/week.
  • Copy creation – Write 3 headlines, design 3 hero graphics and write 3 calls-to-action.
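As a sanity check, the arithmetic linking these sales benchmarks can be sketched in a few lines of Python. The figures are the illustrative targets from the list above, not real data:

```python
# Sketch of how the sales benchmarks above fit together.
# All numbers are the example targets from the list, not measurements.
profit_per_sale = 100   # $ profit per closed sale (benchmark)
sales_per_week = 5      # sales closed per week (benchmark)
weeks_per_month = 4     # approximate weeks in a month

monthly_profit = profit_per_sale * sales_per_week * weeks_per_month
print(monthly_profit)   # 2000, matching the $2000 profit/month benchmark
```

The point is that the benchmarks should be mutually consistent: if any one of them drifts, the bottom-line monthly target drifts with it.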

User metrics from UserCycle, a product from RunningLean’s Ash Maurya that gives you a lot of measurement and learning capabilities out of the box, so that you can measure customer satisfaction and act to improve it.

Customer satisfaction metrics and benchmarks
  • Referrals/month – Get 50 referred leads/month.
  • Referrals requested/customer – Ask each customer for 3 referrals.
  • Customer satisfaction rating (0-5 stars) – Average 4.3/5 stars across all customer satisfaction metrics.
  • What if we disappeared (% very disappointed) – Get at least 40% of customers to say they’d be “very disappointed” if your product were taken away from them (idea via Ash Maurya).
  • Customer satisfaction surveys/month – Survey 100 customers on their satisfaction this month.
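The “very disappointed” metric is just a tally over survey answers. A minimal sketch, with invented responses (the survey wording and data here are illustrative assumptions, not from UserCycle):

```python
# Tallying the "how would you feel if the product disappeared?" survey.
# The responses below are invented for illustration.
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
    "very disappointed", "very disappointed", "not disappointed",
    "somewhat disappointed",
]

very = sum(1 for r in responses if r == "very disappointed")
share = very / len(responses)
print(f"{share:.0%} very disappointed")  # 50% here, clearing the 40% benchmark
```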

Market research / customer development metrics and benchmarks
  • Surveys/week – Complete 15 problem-ranking surveys/week.
  • Surveys requested/week – Ask for 20 surveys to be completed.
  • Interviews/week – Interview 7 prospective customers/week.
  • Interviews requested/week – Request 20 interviews over Facebook, through friends referring friends, etc.

How the metrics and benchmarks inform your weekly/monthly adjustments

The metrics measure your performance against the bottom-line financial goal. The benchmarks are comparison points that instantly tell you whether the measurements are good or bad. If it’s Thursday and you were meant to close five sales this week but you’ve closed only one, then you know, based on your benchmarks, that you’re performing below standard.
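That mid-week check can be sketched by prorating the weekly benchmark over the days elapsed. The day count and sales figure below are the illustrative numbers from the Thursday example, assuming a five-day sales week:

```python
# Mid-week benchmark check: are we on pace for the weekly sales target?
weekly_benchmark = 5    # sales to close per week (benchmark)
days_in_week = 5        # assumed working days in a sales week
days_elapsed = 4        # it's Thursday
sales_so_far = 1        # sales actually closed so far

expected_by_now = weekly_benchmark * days_elapsed / days_in_week
on_track = sales_so_far >= expected_by_now
print(on_track)  # False: 1 sale against an expected 4 means below standard
```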

The conclusions to be drawn from the metrics are conclusions about how well the system performs, since a company should get comparable results no matter who’s running the systems (assuming the staff meet the minimum qualifications).


How do you set benchmarks in the first place?

Obviously, when you’ve just created a system, you don’t know how well it will perform. So rather than invent a meaningless initial benchmark, run the system under standard conditions (real-life conditions are rarely optimal) and measure your results. That first readout is your benchmark. Alternatively, run the system a few times (e.g. over a few weeks or months) and use the average measurement as your benchmark.
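The averaging approach is a one-liner in practice. The weekly readouts below are invented measurements, just to show the calculation:

```python
# Setting an initial benchmark from a few real runs of a system.
# The weekly readouts are invented for illustration.
weekly_sales = [3, 5, 4, 4]  # sales closed in each of the first 4 weeks

benchmark = sum(weekly_sales) / len(weekly_sales)
print(benchmark)  # 4.0 sales/week becomes the initial benchmark
```

Averaging over several runs smooths out a lucky or unlucky first week, at the cost of waiting longer before you have a benchmark to hold the team to.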


Finally: do systems remain the same forever then? What about improvement?

No, systems don’t remain the same forever. The benchmarks are there as initial guides. Once you’ve done all the market research, campaigning and product delivery, then measured your performance and compared against goals, it’s time to look back and see what you can improve.

With experience and regular measurement, the systems’ weak points will become apparent. You then get to brainstorm improvements: streamlining steps, automating repetitive tasks, increasing satisfaction and so on. Eventually you’ll find yourself beating the performance benchmarks on a regular basis. At that point, update the system (i.e. the step-by-step “how to” documentation) as well as the benchmarks that define standard performance.