

How Media and Entertainment Companies Can Monetize Personalized Search With Content Search Testing

Qualitest senior engineers, Deepak Kumar and Ajay Gehani, reveal how content search testing can help media and entertainment companies keep up with digital transformation and the rapid growth in personalized search.

Deepak Kumar, Senior Manager, Engineering and Ajay Gehani, VP, Engineering


When done correctly, personalized search leads to monetization opportunities for media and entertainment companies, providing a way for them to tap into potential sales.

As more and more consumers embrace digital media and entertainment, it's pivotal for CIOs and CDOs to understand what customers need from personalized search and how that shapes their digital transformation initiatives.

Top challenges with content search right now

Consumers want media and entertainment brands to treat them in a personalized way. They expect custom offers and recommendations of what to watch, read, play or listen to. But there are signs that customers’ expectations are outpacing their experiences.

Top of the list of complaints is that content search generates the wrong results. This is followed by gripes that results are in the wrong order.

Some chief marketing officers echo these concerns. They also question why content search doesn’t lead to better product sales. The reason? These are all symptoms of a poor-performing search platform, one that fails to display the correct results and product ads.

This emphasizes how much media and entertainment companies rely on their technology teams. It’s also why you need to create the best content search experiences.

Questions your quality engineering teams should be asking

How can quality engineering (QE) teams take a deeper dive in understanding and testing the search systems that cater to these growing consumer needs? Here are the questions that QE and product management teams should be asking in order to solve this conundrum.

  • Are we testing the search platform correctly?
  • What are the key areas that the business is banking on to generate revenue and that we, therefore, need to focus on while testing?
  • How can we be confident that the product will both deliver seamless UX by serving up the most relevant content to the customer and drive digital revenue through captive and in-context product presentation?
  • How do customers view the content presentation when compared to that of competitors?
  • Will the competitive technology landscape dictate customers’ buying, affiliation and loyalty patterns?

“For the things we have to learn before we can do them, we learn by doing them.” – Aristotle.

To help identify where you can further expand your areas of testing focus, let’s take a look at a few key concepts.

Metadata consistency – what does the data say?

Today’s big name media companies handle content from many sources. Making sure that this content shares the correct set of metadata is vital for its distribution and presentation.

This introduces the concept of data enrichment. Using machine learning and natural language processing techniques can help to further enrich the content or the metadata associated with the content. This enrichment of the data plays an important role in making sure that the content has the right set of vocabularies or standards used by the media and entertainment industry globally.

These taxonomies help to create the right solutions and also serve as requirements that can be asserted for best practices by QE teams. Data enrichment and presentation concepts can be further leveraged to present the right ads to customers when searches are recorded and analyzed for patterns.
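To make the enrichment idea concrete, here is a minimal sketch in Python. The taxonomy mapping and the `enrich` helper are hypothetical; a production pipeline would use NLP models and an industry-standard vocabulary rather than a keyword lookup.

```python
# Minimal metadata-enrichment sketch: map free-text descriptions onto a
# controlled vocabulary. The taxonomy below is illustrative, not a standard.
TAXONOMY = {
    "quarterback": "sports/american-football",
    "touchdown": "sports/american-football",
    "election": "news/politics",
    "senate": "news/politics",
}

def enrich(item: dict) -> dict:
    """Attach taxonomy tags derived from the item's description."""
    words = item["description"].lower().split()
    tags = sorted({TAXONOMY[w] for w in words if w in TAXONOMY})
    return {**item, "tags": tags}

clip = {"id": 1, "description": "Quarterback reacts to election night"}
print(enrich(clip)["tags"])  # ['news/politics', 'sports/american-football']
```

Once tags like these are applied consistently across content sources, QE teams can assert against them directly.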

Storing information structurally and serving it in the fastest way without adding technical and architectural debt are all requirements for gaining competitive advantage, and they should be tested thoroughly – this applies to both functional and non-functional requirements.

Understanding the different taxonomies and standards used in the data enrichment process helps to test the consistency of the metadata sitting in your respective search platforms.

Test environment parity and data relevance

A common problem for organizations in their digital journey is not having relevant and contextual data that drives business outcomes in their system development and test environments. Data relevance from a timeline perspective is also of the utmost importance. In addition, market trends, innovation, news and economic conditions change consumer buying patterns and challenge how content is presented, which in turn influences what consumers search for and which products they want to buy.

Take querying the keywords “Trump” or “Biden”, for example, and the different sets of results we’d get in 2022 versus 2019. This shows how heavily we rely on having the right content in the test environment. More importantly, it demonstrates how crucial it is to design test cases using data that customers are most interested in. The ideal way to achieve this is to constantly redesign test stories in conjunction with product teams.
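The timeline effect can be illustrated with a toy in-memory index (the documents and `search` helper are hypothetical): the same keyword returns a different result set depending on the “as of” date, which is exactly what test data needs to account for.

```python
from datetime import date

# Hypothetical mini-index: each document carries a publication date.
DOCS = [
    {"title": "Biden wins 2020 election", "published": date(2020, 11, 7)},
    {"title": "Biden 2019 campaign launch", "published": date(2019, 4, 25)},
]

def search(keyword: str, as_of: date) -> list[dict]:
    """Return matching docs published on or before `as_of`, newest first."""
    hits = [d for d in DOCS if keyword.lower() in d["title"].lower()
            and d["published"] <= as_of]
    return sorted(hits, key=lambda d: d["published"], reverse=True)

# The same keyword yields different result sets at different points in time.
assert len(search("Biden", date(2019, 12, 31))) == 1
assert len(search("Biden", date(2022, 1, 1))) == 2
```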

Keeping test environments in parity with live production environments can also help to uncover a variety of non-functional issues. For example, establishing a baseline and measuring the performance of search queries against your growing database of contents continuously can help to identify and prevent any bottlenecks with disk I/O operations. This can further help capacity planning and determining the production hardware needed to sustain the average load.

In addition, monetizing on obsolete data and buying patterns won’t lead to the correct results. Instead, product teams should understand trends in the marketplace and work with the technology and QE teams to plan new enhancement stories that seize the opportunity to capture the most revenue through targeted ads.

Understanding content indexing

How important is it to understand indexing of contents or the metadata in your respective search clusters? How important is design in ensuring a seamless customer experience and gaining a competitive advantage?

The answers to these questions are important, and the test strategy needs to drive assertion criteria that address these requirements. As mentioned above, content sets can vary and be grouped in many ways, for example, based on different media types such as text, photos, video, audio, graphics and interactives. To increase the speed of presentation and ensure that the data served is neither erroneous nor irrelevant to the search results, some best practices around content design and architecture must be implemented.

For example, the data can be grouped based on most frequently searched data versus less frequently searched data; by separating live data from archive data; or by ensuring indexing is segregating data on the basis of metadata requirements. These best practices and indexing rules help to make sure that the right content sets are indexed, or stored, at the right location. They also ensure content delivery to the right distribution and presentation platform(s).
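One such rule, live-versus-archive routing, can be sketched as follows. The function name, index names and 90-day threshold are all illustrative, not a standard.

```python
from datetime import date, timedelta

def route_index(doc: dict, today: date) -> str:
    """Pick a target index for a document: recent content stays in the
    'live' index; older content is routed to the archive index."""
    age = today - doc["published"]
    if age <= timedelta(days=90):
        return "content-live"
    return "content-archive"

today = date(2022, 6, 1)
fresh = {"id": "a", "published": date(2022, 5, 20)}
old = {"id": "b", "published": date(2020, 2, 2)}
print(route_index(fresh, today), route_index(old, today))
# content-live content-archive
```

A QE suite can assert that every document lands in the index its metadata dictates, which is the essence of validating indexing rules.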

Having business and technical domain knowledge of content indexing plays a vital role in validating that content is indexed, or stored, in the right location, which in turn decides what customers see as part of their searches on the respective presentation platforms. Lacking that breadth of technical and business knowledge can lead to search and presentation defects when features are deployed into production, eroding customer confidence and loyalty.

Relevant quality engineering and test areas

The key takeaway in the world of content search and monetization is to maintain the steady quality of the content with enriched tags, vocabularies and taxonomies, and maintain and capitalize on the relevance of the returned set of search data.

With that in mind, here are some of the key function points to focus on when testing.

  • Keyword(s)-based queries

These validate that the returned result set matches the keyword(s) provided for the search.

  • Exact phrase or sentence-based queries

Like keyword-based searches, these tests ensure that the result sets returned have the exact search phrase or sentence in the content.
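The two query types above can be contrasted in a small sketch against a hypothetical in-memory catalog: keyword search matches all terms in any order, while phrase search demands the exact sequence.

```python
CATALOG = [
    {"id": 1, "title": "Super Bowl 2022 full highlights"},
    {"id": 2, "title": "Bowl of soup recipes"},
    {"id": 3, "title": "Super slow-motion Bowl shots"},
]

def keyword_search(terms: str) -> list[dict]:
    """Match documents containing every keyword, in any order."""
    words = terms.lower().split()
    return [d for d in CATALOG if all(w in d["title"].lower() for w in words)]

def phrase_search(phrase: str) -> list[dict]:
    """Match documents containing the exact phrase."""
    return [d for d in CATALOG if phrase.lower() in d["title"].lower()]

# Keyword search tolerates word order; phrase search does not.
assert {d["id"] for d in keyword_search("super bowl")} == {1, 3}
assert {d["id"] for d in phrase_search("super bowl")} == {1}
```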

  • Ads and content search parity

This is about working with marketing teams to understand and document what type of product ads should be displayed based on customer queries and buying patterns, and then creating a test strategy that addresses these scenarios.

  • Other metadata

Validating the accuracy of all the metadata of the returned results ensures that the result set retained its enriched taxonomies and classifications throughout the enrichment process.

  • Content retention policies and legal and security requirements

Understanding and validating the different retention policies set for your data is important; failing to do so may lead to legal issues. For example, consumer access to promotional content may need to expire after a certain number of days. Likewise, you should avoid infringing licensing agreements for images, while content restricted for domestic distribution only should not be visible to international consumers.
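These retention and distribution rules lend themselves to simple assertions. Here is a minimal sketch with hypothetical field names (`promo_expires`, `domestic_only`, `home_region`) that a QE suite could test against.

```python
from datetime import date

def visible(doc: dict, viewer_region: str, today: date) -> bool:
    """Apply retention and distribution rules before exposing a document."""
    # Promotional content expires after its retention window.
    if doc.get("promo_expires") and today > doc["promo_expires"]:
        return False
    # Domestically licensed content must not reach international viewers.
    if doc.get("domestic_only") and viewer_region != doc.get("home_region"):
        return False
    return True

today = date(2022, 7, 1)
promo = {"id": 1, "promo_expires": date(2022, 6, 1)}
domestic = {"id": 2, "domestic_only": True, "home_region": "US"}

assert not visible(promo, "US", today)     # promotion expired
assert not visible(domestic, "UK", today)  # geo-restricted
assert visible(domestic, "US", today)
```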

  • Search results’ consistency and data relevancy

Validating that the right quantity of content sets is returned for a given search is key. For example, the content set returned for the search keyword “Super Bowl 2022” before and after the event will be different, as after the event we’re bound to get more results returned.

Another instance would be to validate that the data returned for a specific search criterion is consistent no matter how many times you issue the search or at what time of the year: for example, “Super Bowl 2022 Winner”.

From a monetization perspective, another example may be marketing the most popular ads to search consumers the day after the Super Bowl based on the ads’ target rating points. It’s important to understand business rules and ensure that your test strategy includes detailed assertions to meet those requirements.
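The consistency requirement can be expressed as a simple repeat-and-compare assertion; here is a sketch against a hypothetical deterministic search function.

```python
CATALOG = [
    {"id": 1, "title": "Super Bowl 2022 Winner announced"},
    {"id": 2, "title": "Super Bowl 2022 halftime show"},
]

def search(keyword: str) -> list[dict]:
    """Substring match with a deterministic sort order, so the same query
    always returns an identical result set."""
    hits = [d for d in CATALOG if keyword.lower() in d["title"].lower()]
    return sorted(hits, key=lambda d: d["id"])

# Repeating the same query must return an identical result set every time.
runs = [search("Super Bowl 2022 Winner") for _ in range(5)]
assert all(r == runs[0] for r in runs)
```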

  • Sort and filtering

This is another important feature that consumers use to further personalize their search. You should also give validating the different sort orders, content types and filtering criteria the utmost importance in your test planning activities.

  • Typeahead or autosuggest

This is a favorite feature that makes life easier by completing the name of a movie, for example, based on the first few characters you enter as you start typing. Understanding and testing the rules defined for the enrichment of autosuggest libraries and making sure they work in your search engines is a good test.
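A minimal prefix-matching sketch of the behavior under test (real autosuggest engines use dedicated suggestion libraries and popularity ranking, which this ignores):

```python
def autosuggest(prefix: str, titles: list[str], limit: int = 5) -> list[str]:
    """Return up to `limit` titles that start with the typed prefix,
    ignoring case and sorted alphabetically."""
    p = prefix.lower()
    return sorted(t for t in titles if t.lower().startswith(p))[:limit]

TITLES = ["The Godfather", "The Goonies", "The Good Place", "Thelma"]
print(autosuggest("The Go", TITLES))
# ['The Godfather', 'The Good Place', 'The Goonies']
```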

  • Autocorrect or spellcheck

Like autosuggest, this feature is consumer friendly. It’s also widely used, often as a “Did you mean” feature. Including this in your test strategy maintains the quality of searches by keeping consumers engaged in your respective platforms.

What role does performance play?

As content grows in an enterprise search cluster, it’s important that searches consistently return results within a few milliseconds. So, the non-functional testing aspect plays a vital role here.

Defining key performance indicators and service-level agreements for an environment where content is continuously growing is essential to monitoring and governing search performance.

Continuous and shift-left performance testing, built within the build pipelines and exercising a focused set of queries simulating a variety of searches, helps to set the performance benchmarks for each build. It also lets you catch any performance bottlenecks early in the development cycle.
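A sketch of what such a performance gate in a build pipeline might look like; the in-memory catalog, latency budget and query are all illustrative stand-ins for a real search backend and SLA.

```python
import time

def timed(fn, *args, repeats: int = 50) -> float:
    """Median wall-clock latency of a query function, in milliseconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

CATALOG = [{"title": f"clip {i}"} for i in range(10_000)]

def search(keyword: str) -> list[dict]:
    return [d for d in CATALOG if keyword in d["title"]]

BUDGET_MS = 100  # illustrative SLA; a CI step fails the build when exceeded
latency = timed(search, "clip 42")
assert latency < BUDGET_MS, f"search regressed: {latency:.1f} ms"
```

Running this on every build establishes the benchmark per build and surfaces regressions as soon as they are introduced.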

Other types of non-functional tests to focus on are:

  • Capacity planning

Capacity planning helps to ensure that the given search infrastructure is provisioned to handle the average expected traffic on a day-to-day basis.

  • Load testing / spike testing

Load and spike testing make sure that your search environment can perform well under the expected load, twice the expected load, four times the expected load, and so on. Spike testing simulates an event-based burst in traffic to check that the underlying infrastructure can handle it and perform well.
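A spike can be simulated even in a small harness by firing a burst of concurrent queries and checking that every one still succeeds; the catalog, concurrency levels and `spike` helper below are all hypothetical.

```python
import concurrent.futures
import time

CATALOG = [{"title": f"clip {i}"} for i in range(5_000)]

def search(keyword: str) -> list[dict]:
    return [d for d in CATALOG if keyword in d["title"]]

def spike(concurrency: int, queries_per_worker: int) -> float:
    """Fire a burst of concurrent queries and report total wall time."""
    t0 = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as ex:
        futures = [ex.submit(search, "clip 9")
                   for _ in range(concurrency * queries_per_worker)]
        results = [f.result() for f in futures]
    # Every query must still return hits, even under the burst.
    assert all(results)
    return time.perf_counter() - t0

baseline = spike(2, 10)
burst = spike(8, 10)
print(f"baseline {baseline:.3f}s, burst {burst:.3f}s")
```

Comparing baseline and burst timings at 2x, 4x and higher multiples of expected load gives an early read on how the real infrastructure should be provisioned.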

  • Failover testing

Failover testing determines if your enterprise search platforms handle and perform well under critical failures of the search infrastructure.

Key takeaways

It’s vital that your QE teams build comprehensive test strategies for content search that assure revenue for the business. They should also ensure that all systems are performing at their best, both functionally and non-functionally.

To achieve this, your engineers need to understand:

  • The underlying technology of your search stack.
  • The different libraries used to enhance consumer searches.
  • Any enrichment process involved in metadata regeneration.
  • How your business wants to capitalize on products and generate revenue based on consumer search patterns.

Get it right, and you’ll deliver the personalized search that your customers expect. Keep it accurate, relevant and seamless and your customers will keep coming back for more.
