Author: Robin Jia, Tommy Lin
Last Update: 2024/06/14
This model is only applicable to AMC Instances that include DSP advertisers.
1. Model Introduction
The Audience Label model evaluates the performance of each audience label in DSP delivery across dimensions such as covered audience size, number of purchasers, DPV, DPVR, and purchase conversion rate. By comparing the key metrics of different labels, you can identify the best-performing and highest-potential audiences, optimize your audience label combination, and extend coverage to more high-value users. The model supports custom parameters such as time range and ad scope so that analysis can focus on specific targets.
This model can help answer the following business questions:
- In DSP ads, which untargeted audience labels are worth testing in future deliveries?
- What are the profile characteristics of the audiences covered by DSP ads, and how do different characteristic labels perform in ads?
2. Model Interpretation
This model mainly consists of three parts: insight cards, bubble chart, and ranking table.
In the model's top menu, the predefined model is fixed to Amazon users, the DSP ad type (with free selection of the DSP advertisers associated with the Instance), and all ASINs under the store. You can independently select the analysis time range and the audience label coverage interval of interest. To analyze audience data for specific ads/ASINs, create a custom model with the corresponding filters (see the "Model Customization" section).
The above filtering conditions can be combined flexibly to find audience label combinations that fit your business needs and have real delivery value:
- Use the delivery status filter to compare the performance of delivered versus undelivered labels
- Use the label coverage interval filter to find refined label combinations with moderate coverage
- Use the actual reach filter to exclude label data distorted by overly small sample sizes
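As a rough sketch, the three filters above could be combined as follows in plain Python. The field names (`delivered`, `audience_size`, `reach_uv`) and the sample rows are illustrative assumptions, not the model's actual schema:

```python
def filter_labels(labels, delivered=None, min_size=0, max_size=None, min_reach_uv=0):
    """Filter audience-label rows by delivery status, coverage interval, and reach.

    delivered:     None = ignore; True/False = keep only (un)delivered labels
    min/max_size:  the label coverage interval
    min_reach_uv:  excludes labels whose reach is too small to be reliable
    """
    result = []
    for row in labels:
        if delivered is not None and row["delivered"] != delivered:
            continue
        if row["audience_size"] < min_size:
            continue
        if max_size is not None and row["audience_size"] > max_size:
            continue
        if row["reach_uv"] < min_reach_uv:
            continue
        result.append(row)
    return result

# Hypothetical label rows for illustration only
labels = [
    {"name": "IM Camping Gear",  "delivered": False, "audience_size": 80_000,    "reach_uv": 0},
    {"name": "LS Outdoor Fans",  "delivered": True,  "audience_size": 5_000_000, "reach_uv": 120_000},
    {"name": "IM Tent Shoppers", "delivered": True,  "audience_size": 150_000,   "reach_uv": 900},
]

# Delivered labels with moderate coverage and enough reach to be reliable
picked = filter_labels(labels, delivered=True, min_size=100_000,
                       max_size=10_000_000, min_reach_uv=1_000)
```

Here "IM Tent Shoppers" is dropped by the reach filter (900 reached users is below the 1,000 threshold), mirroring the sample-size caution above.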
Filtering is central to the Audience Label Analysis model: different filter combinations surface accurate potential audiences at different magnitudes.
With the delivery status filter, you can choose to view the performance of potential audience labels that have or have not been delivered.
The label audience package size is the estimated number of users each audience label can reach. Because labels of very different magnitudes generally cannot be compared directly on ad performance values, narrowing this range surfaces smaller, more precise labels (mainly the IM type), while expanding it surfaces larger, broader labels (Demographic, Lifestyle, etc.).
The reached UV number is the number of users actually reached by each audience label. Filtering on this metric excludes audience labels whose data is unreliable because their reach is too small.
The insight cards list the three best-performing audience labels by a key metric such as Total ROAS (switchable), helping you quickly focus on analysis priorities.
The bubble chart intuitively presents the audience labels with the best overall performance (by default, the top 20 ranked by purchase cost and detail page view cost; the metrics and display quantity are adjustable). Bubble size represents the number of users reached, color represents the label type, and a solid (versus dotted) bubble indicates whether the label had ad exposure within the selected time range. The coordinate axes can be dragged, and the chart can be zoomed with the mouse wheel to focus on key areas. From the chart, you can see the overall performance distribution of targeted and untargeted labels at a glance and spot potential high-value labels.
On the default metric axes, bubbles closer to the lower left corner represent better-performing audience labels. By comparing the positions of the solid bubbles (delivered audiences) with the dotted bubbles (undelivered audiences), you can easily spot potential high-value audiences.
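The "closer to the lower left" reading can be approximated by summing each label's rank on the two default cost metrics: a sketch under that assumption, with hypothetical field names (`cpp` for purchase cost, `cpdpv` for detail page view cost) and made-up rows:

```python
def lower_left_ranking(rows):
    """Order labels by combined rank on two cost metrics.

    A smaller combined rank means the bubble sits nearer the lower-left
    corner (cheaper on both axes), i.e. better overall performance.
    """
    cpp_rank   = {r["name"]: i for i, r in enumerate(sorted(rows, key=lambda r: r["cpp"]))}
    cpdpv_rank = {r["name"]: i for i, r in enumerate(sorted(rows, key=lambda r: r["cpdpv"]))}
    return sorted(rows, key=lambda r: cpp_rank[r["name"]] + cpdpv_rank[r["name"]])

# Illustrative data: one cheap undelivered label among two delivered ones
rows = [
    {"name": "A", "cpp": 12.0, "cpdpv": 0.80, "delivered": True},
    {"name": "B", "cpp":  9.5, "cpdpv": 0.60, "delivered": False},
    {"name": "C", "cpp": 15.0, "cpdpv": 1.10, "delivered": True},
]
ranked = lower_left_ranking(rows)

# Undelivered labels near the front of the ranking are delivery candidates
candidates = [r["name"] for r in ranked if not r["delivered"]]
```

In this toy data, label "B" is undelivered yet cheapest on both axes, so it surfaces as the kind of potential high-value audience the chart is meant to reveal.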
The table section provides detailed ranking data for each audience label on each metric and by default displays the top 50 labels. Click the download button in the upper right corner to export the data as an Excel file. Hovering over a label shows a popup with that label's rankings on the other metrics, making it easy to evaluate the overall performance of a single label.
Click the edit button in the upper left corner to expand the sidebar, where you can select additional data metrics (up to 10 at a time). Some metrics become available only after defining the monitored ASIN in a custom model.
In addition, you can filter by audience type. This filter currently applies only to the table section.
With the audience type filter, you can view the performance data of labels under a given type. This is especially useful for the Demographic and Life event audience types: because of their large magnitude, they generally do not appear among the best-performing audiences, yet they carry brand-level insight value. By filtering on audience category and audience magnitude, you can review how these labels perform.
3. Model Customization
Compared with the predefined model, a custom model supports selecting specific advertisers/ad campaigns so you can focus on ad combinations of interest. It also supports setting tracked ASINs, which focuses the analysis on conversion data for specific products.
Potential audiences generally vary greatly between product lines. By using the ad campaign and purchase-ASIN customization to lock in a specific product line, you can build multiple custom models and compare audience label differences across products.
4. Model Data
The core data of the DSP ad Audience Label Analysis model includes the covered audience size, the number of users actually reached, the number of behavior conversions, and the share of each audience label. The covered audience size represents the potential target audience defined by the label's targeting rules, while the reach and conversion numbers reflect the audience's actual response to the ads and products. Combining coverage with actual reach and conversion data gives a more complete picture of each label's real ad effectiveness. In comparative analysis, differences in these metrics reflect differences in ad attractiveness, conversion efficiency, and other aspects among label audiences.
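A minimal sketch of the coverage-to-reach-to-conversion funnel described above. The ratios and parameter names are illustrative derivations, not the model's exact metric definitions:

```python
def funnel_metrics(covered, reach_uv, purchase_uv):
    """Derive simple funnel ratios from the three core counts.

    covered:     potential audience size defined by the label's targeting rules
    reach_uv:    deduplicated users actually shown an impression
    purchase_uv: deduplicated users who purchased
    """
    return {
        # Share of the covered audience that the ads actually reached
        "reach_share": reach_uv / covered if covered else 0.0,
        # UV purchase conversion rate among reached users
        "purchase_rate": purchase_uv / reach_uv if reach_uv else 0.0,
    }

# Illustrative numbers: 2M covered, 100K reached, 2,500 purchasers
m = funnel_metrics(covered=2_000_000, reach_uv=100_000, purchase_uv=2_500)
```

Reading the two ratios together is what the paragraph above recommends: a label can look strong on reach share yet weak on purchase rate, or vice versa, and only the combination reveals its real delivery value.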
However, when interpreting the data, the following points also need to be noted:
- Comparisons between labels are generally meaningful only when their coverage magnitudes are similar. When the coverage gap is too large, effect metrics such as clicks and conversions may not be comparable.
- Some labels may have relatively low actual reach, producing too few purchase conversions, so their conversion rate data can fluctuate widely. In this case, prioritize metrics that reflect attractiveness, such as clicks and click-through rate, and treat the conversion data as reference only. Consider expanding the coverage size to improve data reliability.
- For system-recommended labels with excellent overall performance but low coverage, be aware when trying a new delivery that audience quality may not hold after expansion. For such labels, start with a small-scale trial delivery, observe how performance changes as the audience expands, and then adjust the delivery scale flexibly.
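The sample-size caution above can be sketched as a simple reliability check before trusting a label's conversion-rate metrics. The thresholds are arbitrary illustrations, not model defaults:

```python
def reliable_conversion_data(reach_uv, purchase_uv, min_reach=1_000, min_purchases=30):
    """Return True when a label's conversion-rate metrics are worth trusting.

    Both the reach and the purchase sample must clear a minimum size;
    otherwise conversion rates can swing wildly from a handful of events.
    """
    return reach_uv >= min_reach and purchase_uv >= min_purchases

# Enough reach and purchases: conversion rates are usable
ok = reliable_conversion_data(50_000, 120)

# Too little reach, or too few purchases: fall back to clicks / CTR
too_small_reach     = reliable_conversion_data(800, 45)
too_few_conversions = reliable_conversion_data(20_000, 5)
```

Labels that fail the check would, per the guidance above, be judged on attractiveness metrics (clicks, CTR) instead of conversion rates.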
In summary, when interpreting audience label data, focus on the connections between different metrics and judge each label's delivery value holistically. Avoid relying excessively on a single metric for delivery decisions, and take full account of each label's audience base and conversion sample size. With more comprehensive analysis and flexible delivery strategies, the value of audience labels can be better realized.
5. Glossary
| Type | Term | Description |
| --- | --- | --- |
| Dimension | Label Name | User behavior segment name |
| Dimension | Advertiser | DSP advertiser |
| Dimension | Target Status | Whether the user behavior segment was targeted by the selected DSP campaigns |
| Dimension | Category | Top level of the audience segment taxonomy |
| Dimension | Subcategory | Second level of the audience segment taxonomy |
| Metrics | UV | Unique Viewer |
| Metrics | Click-throughs | Number of click events |
| Metrics | Impressions | Number of impression events |
| Metrics | Reach UV | Deduplicated number of impression users |
| Metrics | Total Cost | Total cost of ads |
| Metrics | Click-throughs UV | Deduplicated number of click users |
| Metrics | Total ATC | Number of add-to-cart events |
| Metrics | Total ATC UV | Deduplicated number of add-to-cart users |
| Metrics | Total DPV | Number of detail page view events |
| Metrics | Total DPV UV | Deduplicated number of detail page view users |
| Metrics | Total NTB Product Sales | Total new-to-brand product sales |
| Metrics | Total NTB Purchase | Total new-to-brand purchases |
| Metrics | Total NTB Purchase UV | Deduplicated number of new-to-brand purchase users |
| Metrics | Total Product Sales | Total product sales |
| Metrics | Total Purchase | Total purchases |
| Metrics | Total Purchase UV | Deduplicated number of purchase users |
| Metrics | Total PR | Total Purchase Rate (Total Purchase / Impressions) |
| Metrics | Total PR UV | Total Unique Viewer Purchase Rate (Total Purchase UV / Reach UV) |
| Metrics | Total ROAS | Total Return on Ad Spend (Total Product Sales / Total Cost) |
| Metrics | CTR | Click-through Rate (Click-throughs / Impressions) |
| Metrics | CTR UV | Unique Click-through Rate (Click-throughs UV / Reach UV) |
| Metrics | eCPC | Effective Cost per Click (Total Cost / Click-throughs) |
| Metrics | eCPC UV | Effective Cost per Unique Click (Total Cost / Click-throughs UV) |
| Metrics | eCPM | Effective Cost per Thousand Impressions (Total Cost / Impressions * 1000) |
| Metrics | Total ATCR | Total Add-to-Cart Rate (Total ATC / Impressions) |
| Metrics | Total ATCR UV | Total Unique Add-to-Cart Rate (Total ATC UV / Reach UV) |
| Metrics | Total ATV | Total Average Transaction Value (Total Product Sales / Total Purchase UV) |
| Metrics | Total CPATC | Total Cost per Add-to-Cart (Total Cost / Total ATC) |
| Metrics | Total CPATC UV | Total Cost per Unique Add-to-Cart (Total Cost / Total ATC UV) |
| Metrics | Total CPDPV | Total Cost per Detail Page View (Total Cost / Total DPV) |
| Metrics | Total CPDPV UV | Total Cost per Unique Detail Page View (Total Cost / Total DPV UV) |
| Metrics | Total CPP | Total Cost per Purchase (Total Cost / Total Purchase) |
| Metrics | Total CPP UV | Total Cost per Unique Purchase (Total Cost / Total Purchase UV) |
| Metrics | Total DPVR | Total Detail Page View Rate (Total DPV / Impressions) |
| Metrics | Total DPVR UV | Total Unique Detail Page View Rate (Total DPV UV / Reach UV) |
| Metrics | Total NTB CPP | Total Cost per New-to-Brand Purchase (Total Cost / Total NTB Purchase) |
| Metrics | Total NTB CPP UV | Total Cost per Unique New-to-Brand Purchase (Total Cost / Total NTB Purchase UV) |
| Metrics | Total NTB PR | Total New-to-Brand Purchase Rate (Total NTB Purchase / Impressions) |
| Metrics | Total NTB PR UV | Total Unique New-to-Brand Purchase Rate (Total NTB Purchase UV / Reach UV) |
| Metrics | Total NTB ROAS | Total New-to-Brand Return on Ad Spend (Total NTB Product Sales / Total Cost) |
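The rate metrics in the glossary are straightforward ratios over the raw counts. As a sketch, a few of them can be recomputed from the underlying events (the input values below are illustrative):

```python
def derived_metrics(impressions, clicks, total_cost, product_sales, purchases):
    """Recompute a few glossary rate metrics from raw counts."""
    return {
        "CTR":        clicks / impressions,             # Click-throughs / Impressions
        "eCPM":       total_cost / impressions * 1000,  # cost per thousand impressions
        "eCPC":       total_cost / clicks,              # Total Cost / Click-throughs
        "Total PR":   purchases / impressions,          # Total Purchase / Impressions
        "Total ROAS": product_sales / total_cost,       # Total Product Sales / Total Cost
    }

# Illustrative figures: 200K impressions, 1K clicks, $500 spend,
# $2,000 in attributed sales, 100 purchase events
m = derived_metrics(impressions=200_000, clicks=1_000, total_cost=500.0,
                    product_sales=2_000.0, purchases=100)
```

The UV variants in the glossary follow the same pattern but divide by the deduplicated user counts (e.g. Reach UV) instead of raw event counts.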