These Are the Formulas 5 Major Publishers Use to Grade Their Content

By Sam Petulla September 10th, 2014

In our latest Contently Labs piece, we answer a question we hear often from current and prospective brand publishers. Today’s question: What are some formulas I can use to calculate the success of my content?

Content has been battered around by bad metrics for years, the result of an advertising industry focused on maximizing impressions rather than building relationships between consumers and businesses.

That’s changing. A new crop of “content scores” and “blended metrics” is throwing off the shackles of oppressive metrics. These score-based approaches combine multiple data sources: weighting several metrics, digging into different data sets, or folding in dozens of minute inputs to generate a single measurement.

This is an encouraging sign. As other industries have discovered, sometimes the way to gauge success is to step outside the usual measures to get the right angle on things. The Economist uses a “Big Mac Index” to understand the global economy. And men’s underwear sales are used as an economic growth indicator (go figure). So while publishers are just starting to measure the available signals to understand content, we’re excited about where this might go.

Here’s what some of the leaders of the pack are measuring today.


LinkedIn released their Content Marketing Score this spring to give marketers who publish on the platform some extra insight into how well their efforts are succeeding there.

LinkedIn breaks their Content Marketing Score into three parts (and an e-book goes deep into it). The score does the following:

1. Quantifies your impact by measuring your engagement with your audience.

2. Benchmarks your performance versus your peer set.

3. Provides recommendations for improvement.

The score resembles the image below. Users are assigned a numeric engagement figure, which is weighed against the total potential audience; finally, an impact score is provided.

LinkedIn’s Marketing Solutions team explained the components of the score in a kick-off blog post:

“It measures member engagement with your Sponsored Updates, Company Pages, LinkedIn Groups, employee updates, and Influencer posts (if applicable). It then gives you a single score, ranked against your competitive set. You will also get recommendations about how to improve your score based on different levers you can pull to give you more reach, frequency and engagement. You can filter your score by region, seniority, company size, job function, and industry.”
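LinkedIn hasn’t published the exact formula, but based on the description above, the score boils down to engagement measured against audience size, then ranked against a competitive set. A minimal sketch, with the division-by-audience form and the percentile ranking both assumptions on our part:

```python
# Hypothetical sketch of a LinkedIn-style Content Marketing Score.
# Assumption: the score is unique member engagement (likes, shares,
# comments, clicks) divided by the total target audience, then ranked
# against a peer set. LinkedIn has not published the exact formula.

def content_marketing_score(unique_engagers, target_audience):
    """Engagement rate across Sponsored Updates, Company Pages, Groups, etc."""
    if target_audience == 0:
        return 0.0
    return unique_engagers / target_audience

def rank_against_peers(my_score, peer_scores):
    """Percentile rank versus a competitive set, as LinkedIn describes."""
    below = sum(1 for s in peer_scores if s < my_score)
    return below / len(peer_scores)

score = content_marketing_score(unique_engagers=4500, target_audience=120000)
percentile = rank_against_peers(score, peer_scores=[0.01, 0.02, 0.05, 0.08])
```

Filtering by region, seniority, or industry would simply mean recomputing both numbers on the filtered slice of members.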

The idea, of course, is that LinkedIn is a walled garden in terms of analytics measurement. You can’t hook Google Analytics up to your LinkedIn Influencer posts (yet). Luckily, LinkedIn has built a robust alternative.


The current leader in content score measurements is Moz. Moz calls their score the “One Metric” and outlined their approach in a blog post, which we’ve broken down into key pieces below:

The crux of the “One Metric” is that Moz compares a piece of content to the content that came before it and judges it accordingly. Moz writes: “We made it by combining several other metrics, or ‘ingredients,’ that fall into three equally weighted categories”:

  1. Google Analytics
  2. On-page (in-house) metrics
  3. Social metrics

The Moz score output looks like this:

Moz calculates the score by averaging a host of metrics. For example, on the social side, they total the shares of all of their content over the last two months (you can alter the number of months you want to measure) and use that as their benchmark. “Then, if our next post gets more than that expected number, we can safely say that it did well by our own standards. The actual number of tweets doesn’t really matter in this sense—it’s about moving up and to the right, striving to continually improve our work.”

The below graphic shows how Moz calculates their content score:

Once Moz has those score numbers, they apply them to a logarithmic scale, which normalizes them into a single One Metric that can be used as a score. The log scale reduces the effects of extreme values and looks like the following, with expected performance on the bottom and score on the left:

It may sound highly complex—maybe even too complex—but it’s not. With the functionality of Excel, you can easily set up this formula, plug in your analytics and social figures, and then get a score output, which is exactly how the “One Metric” is computed by Moz. Here’s a template to save you some time.
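The same pipeline is easy to sketch in code. The structure below follows Moz’s description (benchmark ratios, three equally weighted categories, log-scale normalization), but the specific metrics shown and the base-10-log-plus-one scaling are our assumptions, not Moz’s published numbers:

```python
# A minimal sketch of a Moz-style "One Metric": compare each metric to a
# benchmark average of recent posts, dampen outliers on a log scale, and
# average the three equally weighted categories. The metric choices and
# log base here are assumptions for illustration.
import math

def category_score(actuals, benchmarks):
    """Average ratio of actual metric values to benchmark (expected) values."""
    ratios = [a / b for a, b in zip(actuals, benchmarks) if b > 0]
    return sum(ratios) / len(ratios)

def one_metric(google_analytics, on_page, social):
    """Each argument is an (actuals, benchmarks) pair for one category."""
    raw = [category_score(*google_analytics),
           category_score(*on_page),
           category_score(*social)]
    # Log scale reduces the effect of extreme values: a post exactly at its
    # benchmark scores 1.0, a post at 10x its benchmark scores 2.0.
    scaled = [1 + math.log10(r) for r in raw]
    return sum(scaled) / len(scaled)

score = one_metric(
    google_analytics=([1200], [1000]),   # e.g. pageviews vs. benchmark
    on_page=([3.0], [3.0]),              # e.g. minutes on page
    social=([150], [100]),               # e.g. total shares
)
```

A post that beats its benchmarks lands above 1.0; one that matches them exactly scores 1.0, which is what makes the metric comparable across posts of very different absolute popularity.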


Sharethrough has a “Content Quality Score” for their ad exchange, similar to Google’s Quality Score, which measures the effectiveness and relevance of ads and uses that score to help determine what the ads cost.

Neither Sharethrough nor Google fully lifts the curtain on their quality scores, but enough is public to know what each looks for in ad content. Sharethrough says that unlike Google, their score algorithm “focuses on signals like sentiment analysis and social sharing data to understand innate value of the content itself.” Google, on the other hand, looks primarily at referral data, keyword relevancy, and the landing page for an ad.
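Since the algorithm itself is private, any concrete version is guesswork. Purely as an illustration of how the two signals Sharethrough names could blend into one number, here is an invented weighting (the weights, the 0–100 scale, and the share-rate cap are all hypothetical):

```python
# Purely illustrative sketch of blending the two signals Sharethrough
# names -- sentiment analysis and social sharing data -- into a single
# quality score. Every constant here is an assumption, not Sharethrough's.

def content_quality_score(sentiment, share_rate, w_sentiment=0.5):
    """sentiment: -1..1 from a sentiment model; share_rate: shares per impression."""
    sentiment_component = (sentiment + 1) / 2        # map -1..1 onto 0..1
    share_component = min(share_rate * 100, 1.0)     # cap credit at a 1% share rate
    blended = (w_sentiment * sentiment_component
               + (1 - w_sentiment) * share_component)
    return round(blended * 100)                      # present on a 0-100 scale

score = content_quality_score(sentiment=0.6, share_rate=0.004)
```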

Sharethrough’s Content Quality Score is featured in their dashboard and looks like this:


BuzzFeed is known for obsessing over social and zeroing in on the science of virality as if content were a petri dish and their offices a lab. The main measurement they use for the outputs of their experiments is “Lift.” Lift measures how much social media sharing contributes to the impressions earned by an ad.
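BuzzFeed hasn’t published the exact formula, but lift is commonly described as the views sharing earns on top of the views BuzzFeed seeds directly. A hypothetical sketch of that reading:

```python
# Hypothetical sketch of BuzzFeed-style "social lift": the ratio of
# share-driven (viral) views to the views from direct promotion (seeding).
# The exact formula is not public; this is one common reading of it.

def social_lift(seed_views, viral_views):
    """Ratio of views earned through sharing to views from direct promotion."""
    if seed_views == 0:
        return 0.0
    return viral_views / seed_views

# A post seeded to 10,000 viewers that picks up 25,000 share-driven views
# has a lift of 2.5: sharing more than doubled its promoted reach.
lift = social_lift(seed_views=10_000, viral_views=25_000)
```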

Social lift might be BuzzFeed’s main squeeze, but it’s not the only metric in town. In fact, in multiple interviews, including here with us at Contently, BuzzFeed has elaborated that they use a variety of content measurements when determining success. “There’s lots of different things you can look at for different sorts of problems,” Ky Harlin, BuzzFeed’s lead data scientist, said at the Contently Summit. “Using multiple [sets of data] is really the best way to understand it best.”

Moreover, Dao Nguyen, vice president of growth and data at BuzzFeed, told the Columbia Journalism Review that BuzzFeed uses components of the engaged-time metric—despite BuzzFeed eschewing time on page as a content measurement. According to Nguyen, BuzzFeed will look at “what percentage of the page the reader has attained in addition to social metrics.”

Harlin elaborated on that metric measurement in a separate interview with us, saying, “We treat each individual item in a list almost like its own article. So we’ll try to really figure out what people are engaging with and turn a list of 45 items to a list of 25 items without the duds, reordered to make it most likely to share.”


Kapost’s “Content Score” is designed to measure engagement around qualified leads. The score assumes the following funnel path:

Kapost gives the first and last touch in a content funnel the most weight—say, 60 percent of the total score, divided in two. The remaining 40 percent is divvied up between the middle content assets.

Kapost doesn’t measure any one particular interaction; instead, it looks in the aggregate at which content users touch the most prior to purchase. Over a month, every time a piece of content touches a consumer before a lead is counted, with the extra weights applied to the content responsible for the first and last points of contact, and everything is tallied up at the end. To see the Kapost Score in an example, see their blog.
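The weighting scheme above is straightforward to sketch. The 60/40 split and the even division among middle touches come from the description above; the per-asset tallying across paths is our assumption about how the weights accumulate, and the asset names are invented:

```python
# Sketch of Kapost-style first/last-touch weighting: the first and last
# touches before a lead split a fixed share of the credit (60% here,
# i.e. 30% each), and the remaining 40% is divided evenly among the
# middle touches. Tallying per asset across paths is an assumption.
from collections import defaultdict

def score_touchpoints(paths, endpoint_weight=0.6):
    """paths: list of content-asset sequences, each ending in a lead."""
    scores = defaultdict(float)
    for path in paths:
        scores[path[0]] += endpoint_weight / 2     # first touch: 30%
        scores[path[-1]] += endpoint_weight / 2    # last touch: 30%
        middle = path[1:-1]
        for asset in middle:                       # middle touches share 40%
            scores[asset] += (1 - endpoint_weight) / len(middle)
    return dict(scores)

month = [["ebook", "webinar", "case-study", "demo"],
         ["blog-post", "demo"]]
totals = score_touchpoints(month)
```

In this invented month, the “demo” asset closed both leads, so it ends up with the highest tally even though each individual touch carried the same 30 percent last-touch weight.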

It’s still early days, but expect to hear a lot more about content scores in the months to come. Are there any we missed? Let us know @Contently. We’ll owe you a retweet.

Image by Johan Swanepoel