Tara Meyer
November 19, 2025
Push notifications. Onboarding optimization. Engagement loops. Mobile publishers know these retention tactics inside and out.
But here’s what gets overlooked: your app retention strategies are only as good as your measurement framework. None of these tactics matter if you’re measuring retention wrong. Bad measurement leads to bad decisions. You’ll scale the wrong campaigns, cut the right ones, and wonder why your retention isn’t improving.
We see it more often than not. The problem doesn’t lie in the execution; it lies in the measurement.
When your retention metrics aren’t valid, or don’t measure the right thing, every downstream decision compounds the error. The problem becomes systemic, spreading inefficiency within and across teams.
“Most mobile publishers don’t know the difference between measurement methodologies. They see retention and assume they know what it is, but there’s a huge difference,” explains Roman from Tenjin. “Once you know the difference, you make much smarter business decisions.”
In this guide, we’ll hear from Tenjin’s Marketing Director Roman and Jas, a Senior Product Manager, who will discuss two retention measurement approaches most publishers confuse and why choosing the right one is crucial for an effective app retention strategy.
Retention strategy isn’t about vanity metrics and quick tactics. It’s about choosing an accurate measurement methodology and the retention signal that drives sustainable growth.
What Is Retention And Why Does It Matter?
Retention measures how many users come back to your app after installing. It’s one of the clearest signals of user quality and long-term value potential.
Day 1 retention matters most because it’s your earliest indicator of source quality. As Roman from Tenjin explains:
“The most common metric is Day 1 retention. It’s one of the first signals that helps you identify if a source is good or not.”
For user acquisition managers, Day 1 retention usually determines budget allocation. For product teams, it validates onboarding effectiveness. And for publishers, it often drives revenue share agreements. It has many implications, and it’s a very important stat.
But consistency matters. Retention numbers are meaningless if you’re not measuring them consistently.
Some Other App Retention Strategies (In Brief)
Before we dive into measurement methodology, let’s acknowledge that retention optimization involves multiple tactics, and each can work in its own right. However, there’s a big difference between the list below and what we would call a measurement methodology.
- Onboarding optimization: Reducing friction in the first session to increase likelihood of return.
- Push notifications: Re-engaging users at strategic moments to bring them back to the app.
- Engagement loops: Building habits through rewards, streaks, or social features.
- Feature adoption: Guiding users to discover core value quickly.
These are all tactics that have been proven to improve retention. They’re backed by data, case studies, and industry best practices. However, they are all retention improvement tactics rather than a true app retention strategy. That distinction matters.
And in this article, we are specifically focusing on retention measurement methodology. These two things are closely related, but they are fundamentally different.
As Jas explains:
“We know our users have different use cases for running campaigns, working with publishers, and comparing with different analytics providers. Which is why we offer both [absolute and relative] retention cohorts or cohort strategies.”
Let’s find out what she means.
Two Ways to Measure Retention: Absolute vs. Relative
Most publishers see a retention number and assume they know what it means. But did you know that there are actually two fundamentally different ways to measure retention?
What Are Cohorts?
Jas already started to introduce retention cohorts, but before we dive deeper, let’s make sure we are all on the same page. At Tenjin, a cohort is a group of users with a common attribute.
This can be any shared characteristic, but the most common cohort is a group of users who installed your app on a specific date (the acquisition date). Other examples are users from the same country, or those who come from the same campaign, channel, or even creative.
Cohort analysis lets you track how different groups of users behave over time. For measurement, it then becomes important to define exactly when they installed.
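To make this concrete, here’s a minimal Python sketch (the user IDs and timestamps are invented for illustration) of grouping users into daily install cohorts:

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical install timestamps keyed by user ID (illustration only).
installs = {
    "user_a": datetime(2025, 11, 1, 18, 0, tzinfo=timezone.utc),
    "user_b": datetime(2025, 11, 1, 23, 59, tzinfo=timezone.utc),
    "user_c": datetime(2025, 11, 2, 9, 30, tzinfo=timezone.utc),
}

# Group users into cohorts by their acquisition (install) date.
cohorts = defaultdict(list)
for user_id, installed_at in installs.items():
    cohorts[installed_at.date()].append(user_id)

for cohort_date, users in sorted(cohorts.items()):
    print(cohort_date, users)  # e.g. 2025-11-01 ['user_a', 'user_b']
```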
Absolute Retention (Calendar Date Method)
Absolute retention, also called calendar date or UTC-based retention, anchors measurement to the calendar rather than the user.
Here’s the core process: retention windows reset at a fixed time each day, usually midnight UTC. If a user installs your app late in the evening and opens it again just two hours later, they’ve crossed into a new calendar day. According to absolute retention logic, that counts as Day 1 retention.
“Let’s say a user installed the app at 6 PM, so they became a Day 0 user. Then they come back at 8 PM. They’re a retained user on Day 1,” explains Jas.
It sounds straightforward, but there’s a problem: a user who comes back after a few hours isn’t really retained in any meaningful sense. A user who engages for two hours straight can register the same retention signal as one who deliberately returns 20 hours later. It’s the calendar boundary, rather than user behavior, that defines retention.
Unfortunately, for data-driven decision-making, the calendar boundary introduces noise that can obscure genuine retention patterns, especially if you don’t know what to look for.
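As a simplified illustration of the calendar-date logic (not Tenjin’s actual implementation), the check boils down to whether any session falls on the UTC calendar day immediately after the install date:

```python
from datetime import datetime, timezone

def absolute_day1_retained(install_at: datetime, sessions: list[datetime]) -> bool:
    """Day 1 retained if any session lands on the next UTC calendar day."""
    install_day = install_at.astimezone(timezone.utc).date()
    return any(
        (s.astimezone(timezone.utc).date() - install_day).days == 1
        for s in sessions
    )

# Install at 23:59 UTC and return two minutes later: the session crosses
# midnight, so the calendar-date method counts it as Day 1 retention.
install = datetime(2025, 11, 1, 23, 59, tzinfo=timezone.utc)
session = datetime(2025, 11, 2, 0, 1, tzinfo=timezone.utc)
print(absolute_day1_retained(install, [session]))  # True
```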
Relative Retention (24-Hour Method)
Relative retention flips the script by anchoring measurement to the user, not the calendar. This is where retention measurement starts to reflect actual user behavior.
Here’s how it works: each user’s retention window begins at their precise install timestamp and extends 24 hours forward. Jas says, “When we say relative, it means relative to their installation date. You install the app at 1 PM, and you need to come back within 24 hours to be called a retained user.”
That means your Day 1 retention window runs from 1 PM to 1 PM the next day. If you return at any point within that 24-hour window, you’re counted as retained.
The elegance lies in what this eliminates: no calendar artifacts, no timezone distortions, no false positives created at the midnight boundary. A user in India and a user in Canada are measured against the same 24 hours, regardless of where UTC midnight falls in their local day. The method is timezone-agnostic and user-specific.
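Here’s a simplified sketch of the 24-hour check described above (illustration only, not Tenjin’s implementation; how later days are bucketed is an assumption). The window is anchored to each user’s own install timestamp:

```python
from datetime import datetime, timedelta, timezone

def relative_day1_retained(install_at: datetime, sessions: list[datetime]) -> bool:
    """Retained if any post-install session falls within the 24-hour
    window that starts at the user's own install timestamp."""
    window_end = install_at + timedelta(hours=24)
    return any(install_at < s <= window_end for s in sessions)

# Install at 1 PM UTC; a return 20 hours later is inside the user's
# 24-hour window, so the user counts as retained regardless of where
# UTC midnight falls in their local time zone.
install = datetime(2025, 11, 1, 13, 0, tzinfo=timezone.utc)
session = datetime(2025, 11, 2, 9, 0, tzinfo=timezone.utc)
print(relative_day1_retained(install, [session]))  # True
```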
“The most common metric is Day 1 retention. It’s one of the first signals that helps you identify if a source is good or not. In my opinion, it’s better when the retention number is conservative. Because absolute is always higher than relative, I would always use relative retention,” Roman emphasizes.
This conservative approach isn’t pessimism; it’s precision. When your Day 1 retention reads 35%, you can be confident those are users who genuinely came back, not users who happened to cross a calendar boundary.
With clarity, you can transform how you evaluate user acquisition sources, allocate budget, and get to work on those retention and engagement tactics. Relative retention rate becomes a strategic compass on your dashboard.
False Positive Signals
As you can tell, the difference between relative versus absolute retention is crucial for your decision-making process.
With absolute retention, even if a user installs at 11:59 PM and returns two minutes later at 12:01 AM, they’re counted as retained. This creates inflated retention numbers that don’t reflect real engagement.
“This is sort of a false positive signal, whether you’re doing analytics or user acquisition, because I’m not sure if this user is really retained. They just came back in two minutes,” Roman points out.
For UA managers optimizing campaigns based on Day 1 retention, this matters enormously.
False positives can make a poor-quality source look good, leading you to waste budget on users who aren’t actually engaged.
Which Method Should You Use?
There is no strict dichotomy between good and bad retention types. What matters most is your use case, and each method comes with its own lens and controls. The key is to align the retention definition with your decision-making needs, and to be transparent about what the metric actually signals.
- Absolute retention is useful for:
  - Understanding whether users return at all within a window
  - Benchmarking raw activation
  - Benchmarking onboarding completion

  It’s a blunt measurement with a clear signal about return frequency, but it can overstate engagement if short, incidental returns are common.

- Relative retention is useful for:
  - Measuring sustained engagement more precisely
  - Determining long-term stickiness

  Although it is slower to read, it is better at distinguishing true user health from quick, superficial re-entries.
Different Use Cases for Absolute vs Relative Retention
Absolute and relative retention are practical tools that shape decisions across analytics, UA, and publisher negotiations. Depending on whether you’re comparing platforms, optimizing campaigns, or aligning with publishers, picking the right retention method matters. Below are two use cases that illustrate when to use each method and why the choice matters for outcomes and negotiating power.
Using Absolute Retention to Compare Analytics Providers
Let’s say you’re evaluating retention across platforms. You pull the retention rate from your MMP and see one number. Then you pull the same data in Firebase and see a different number. You’re quick to assume there’s a mistake or a technical issue.
“I’ve seen so many cases where a developer starts working with a publisher, they see the numbers are different, and they have no idea what it is. They think there’s some technical issue, but in reality, it’s just a different methodology,” Roman explains.
Most analytics providers, including Firebase, use absolute (calendar date) retention. If your goal is to compare data across platforms, you need to use the same methodology; with the same methodology, your numbers will look similar.
“A lot of these analytics providers work on calendar dates because they care more about calendar date numbers on the analytics side. So if your use case is to compare different providers, then yes, absolutely use absolute cohort,” says Jas.
Be sure to check which methodology your mobile marketing analytics provider uses before you start comparing numbers. Using absolute retention ensures an apples-to-apples comparison.
Using Relative Retention for Running UA Campaigns
If you’re a UA manager trying to identify quality sources and optimize spend, relative retention is your best bet.
“In my opinion, it’s better when the retention number is conservative. And, because the absolute is always higher than relative, I would always use relative retention,” says Roman.
Relative retention gives a more accurate picture of user engagement because it reduces the noise.
Jas agrees:
“If you want to analyze your users, run campaigns, and do user-based cohort analysis with minimum disruptions from different time zones, then using relative retention will give you the best answers for how your users are performing and how your campaigns are working.”
Conservative metrics lead to smarter spending. You want to find the best users as quickly as possible and waste as little budget as possible on users who won’t stick around.
Align Your Methodology with Publishers
When you’re reporting retention to publishers or negotiating deals based on retention metrics, alignment on measurement methodology is a necessary step.
This alignment matters because absolute retention is always higher than relative retention, and the gap is largest in the first few days. If you’re showing relative retention and your publisher is expecting absolute, your numbers will look worse than they actually are. That gap can cost you better rates, preferred placements, or even deals altogether.
“You need to make sure that you know what type of retention they use, and make sure you’re reporting on the retention that’s higher. It’s in your interest,” advises Roman.
This isn’t about gaming the system. It’s about ensuring fair comparison. Publishers often default to absolute retention because it’s the standard in many analytics platforms and gives a clearer view of raw return behavior. But, if you’re using a different methodology without flagging it, you’re introducing friction and misalignment into what should be a straightforward conversation.
A simple clarification, even a five-minute conversation about measurement standards and methodology, can prevent misunderstandings. It can protect your reputation and help you negotiate with more leverage.
Why Measurement Methodology Matters More Than You Think
As Roman puts it: “Once you know the difference, you make much smarter business decisions, whether it’s for publishing, user acquisition, or just analytics.”
The difference between absolute and relative retention shapes which sources you scale, which deals you accept, and whether your data tells the truth or just a convenient story.
- False positives from absolute retention can lead you to scale campaigns that deliver low-quality users.
- Misaligned methodologies create confusion and waste time troubleshooting non-existent technical issues.
- Using different retention definitions can make it look like you’re underdelivering when you’re actually hitting targets.
You can’t optimize what you’re measuring incorrectly. Getting the measurement right is the foundation that makes every other retention strategy effective.
How to Choose and Implement App Retention Strategies in Tenjin
Having trouble deciding? The good news is that you don’t have to choose just one. Tenjin has a feature that lets you toggle between absolute and relative retention.
User-Level Flexibility
One of the unique aspects of Tenjin’s approach is that cohort strategy is set at the user level, not the organization level.
“Every user can pick their own strategy. Maybe a UA manager wants to use relative, maybe a dev wants to use absolute,” explains Jas.
This means your UA team can analyze campaigns using relative retention while your analytics team compares data with Firebase using absolute retention. Everyone gets the view they need without compromising.
How to Set Up a Cohort Strategy in 30 Seconds
Setting your preferred cohort strategy in Tenjin takes about 30 seconds, but the implications ripple across every metric you track:
- Step 1: Go to “My Account” in your Tenjin dashboard. This is your control center for platform-wide settings.
- Step 2: Click “Manage User”. This opens your user-level preferences, where you can configure how Tenjin processes and displays your data.
- Step 3: Find “Cohort Strategy”. This is where the magic happens: cohort strategy defines how Tenjin buckets users and measures their behavior over time.
- Step 4: Choose “Relative” (24-hour method) or “Use UTC” (absolute/calendar date method).
- Step 5: Click “Update”.
Your selection is now live. All your reports, not just retention but also ROI, LTV, and ROAS, will now use your selected cohort strategy. This ensures consistency across every decision you make, from bid adjustments to publisher negotiations.
Tip: Document your chosen methodology and share it with your team (especially UA managers, analysts, and finance) so everyone interprets the data the same way. A shared understanding prevents costly miscommunications down the line.
Tenjin’s Real-Time Calculation
When you switch between strategies, there’s no data loss, no waiting, no manual re-processing.
“The coolest thing is that it all calculates on the fly. Switch to absolute, one set of numbers. Switch back, other numbers. Nothing is lost. You can always find the best approach and not lose the data,” Roman notes.
This isn’t just a UI trick. It’s a fundamental architectural advantage. Most analytics platforms bake cohort logic into their data pipelines at ingestion time, meaning switching methodologies requires reprocessing historical data or running parallel pipelines.
Tenjin’s approach is different because it’s built on ClickHouse, a columnar database optimized for real-time analytics at scale. Instead of pre-calculating metrics based on a fixed cohort definition, Tenjin stores raw, immutable event data and dynamically generates queries based on your selected strategy.
“We’re dynamically generating queries with aggregated tables that are immutable, and they’re able to pick different strategies and run different queries on the fly. So it’s super, super fast,” Jas explains.
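As a toy illustration of that idea in plain Python (not Tenjin’s actual ClickHouse queries, and using the simplified Day 1 definitions from earlier in this article), the same immutable event log can answer either question at query time:

```python
from datetime import datetime, timedelta, timezone

# Raw, immutable event log: (user_id, install_at, first_return_at).
EVENTS = [
    ("u1", datetime(2025, 11, 1, 23, 59, tzinfo=timezone.utc),
           datetime(2025, 11, 2, 0, 1, tzinfo=timezone.utc)),   # returns 2 minutes later
    ("u2", datetime(2025, 11, 1, 9, 0, tzinfo=timezone.utc),
           datetime(2025, 11, 2, 15, 0, tzinfo=timezone.utc)),  # returns 30 hours later
]

def day1_retained(install_at, session_at, strategy):
    """Evaluate Day 1 retention at query time under the selected strategy."""
    if strategy == "absolute":  # next UTC calendar day
        return (session_at.date() - install_at.date()).days == 1
    if strategy == "relative":  # within 24 hours of the install timestamp
        return install_at < session_at <= install_at + timedelta(hours=24)
    raise ValueError(f"unknown strategy: {strategy}")

# Switching the toggle just re-runs the calculation over the same raw data;
# nothing is reprocessed and nothing is lost.
for strategy in ("absolute", "relative"):
    rate = sum(day1_retained(i, s, strategy) for _, i, s in EVENTS) / len(EVENTS)
    print(strategy, f"{rate:.0%}")  # absolute 100%, relative 50%
```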
Measurement as a Retention Strategy
The smartest retention strategy starts with choosing the right metrics. Once you get that foundation right, everything else becomes clearer.
As Jas puts it: “Basically with this, we’ve made your cohort strategy simple.”
And that simplicity unlocks better decisions, smarter spending, and more accurate optimization. Because the best app retention strategies don’t start with tactics. They start with getting the metrics right.
Ready to see cohort flexibility in action? Explore Tenjin’s dashboard to toggle between absolute and relative retention in real-time, or watch the full conversation between Roman and Jas for deeper insights on retention measurement.
Read the full transcript.
Roman: The most common metric is day one retention. It’s one of the first signals that helps you identify if a source is good or not. In my opinion, it’s better when the retention number is conservative. Because absolute is always higher than relative, I would always use relative retention.
***
Hi everyone, welcome to another edition of Tenjin ROI 101. This is Roman, and today I’m joined by Jas from our product team.
Jas: Hey guys, I’m Jas, or Jaspreet, but everyone likes to call me Jas. I’m a Senior Product Manager at Tenjin, and I’m happy to be here today, Roman.
Roman: It’s an exciting topic we have here today. We’ll talk about absolute and relative cohorts—or absolute and relative retention. Most mobile publishers don’t know the difference. They see retention and assume they know what it is, but there’s a huge difference. Once you know the difference, you make much smarter business decisions, whether it’s for publishing, user acquisition, or just analytics.
Let’s jump into it. We’ve prepared some beautiful slides, and toward the end, Jas will show us how it works in Tenjin.
Here is an absolute cohort. Jas, do you want to explain what it is?
Jas: Sure. An absolute cohort—you can also call it a calendar date cohort. Before I go into that, let me talk a little bit about what a cohort is.
When we talk about cohorts, especially at Tenjin, we mean when a user installs the app, and then we build cohorts based on that. When we say “absolute cohort,” we’re talking about when a user installed the app by absolute calendar date. In Tenjin’s terminology, we use UTC time zone right now.
Let me give you an example. Let’s say your UTC day ends at 7 PM local time. If a user installs your app before 7 PM, they’re Day Zero. If they install after that, their Day Zero starts after 7 PM. This changes how retention works a little bit.
The way retention changes is: let’s say a user installed the app at 6 PM, so they became a Day Zero user. Then they come back at 8 PM—they’re a retained user on Day One.
I just talked a lot about that, but in simpler terms: absolute is just a description for the cohort strategy we use. It’s based on calendar date.
Roman: Yes. As you can see here, even if a user was acquired on a specific day—let’s say 11:59 PM—and they come back two minutes later at 12:01 AM, they’ll be considered retained if we’re talking about retention cohorts, right?
Jas: Yes. This is sort of a false positive signal, whether you’re doing analytics or user acquisition, because I’m not sure if this user is really retained. They just came back in two minutes.
Roman: Right. But sometimes, if you want to do more calendar date analytics or for other use cases we can talk about later, then using absolute cohort might make sense for your use case.
Jas: Yeah, absolutely. There are no good or bad retention types—there are just different use cases. And we have a slide for that too.
Roman: Another one is relative retention.
Jas: So I talked about what a cohort is—it’s basically based on installation date. Absolute is a calendar date, while relative describes a 24-hour period.
When we say “relative,” it means relative to their installation date. Let’s say, Roman, you install an app at 2 PM your time. It doesn’t matter if it’s UTC time zone or US New York time zone—it’s relative to your installation time. That’s what we mean by relative, and it’s 24 hours.
So you install the app at 1 PM, and you need to come back within 24 hours to be called a retained user. That’s the core difference between a relative cohort versus an absolute cohort where we use UTC time zone.
Roman: Makes sense. Now let’s see the different use cases.
For relative retention, I can start based on my experience in user acquisition. The most common metric is Day 1 retention. It’s one of your first signals that helps you identify if a source is good or not. In my opinion, it’s better when the retention number is conservative. Because absolute is always higher than relative, I would always use relative retention.
Jas: I agree with that. If you want to analyze your users, run campaigns, and do user-based cohort analysis with minimum disruptions from different time zones, using relative retention will give you the best answers for how your users are performing and how your campaigns are working.
Roman: Now, for comparing with analytics providers—and the disclaimer here is that we’re using this terminology “absolute versus relative,” but analytics providers might use different labels. My advice is always to check what methodology they use before you compare the data.
Based on our research, we’ve seen that most analytics providers—for example, Firebase—use absolute retention. Once you know that, you save yourself a lot of time when you start comparing data between Firebase and Tenjin, or Firebase and any other MMP.
Jas: 100%. That’s one of the biggest use cases for using absolute cohort strategy. If your use case is to compare different analytics providers or different data systems in general, using the absolute cohort strategy will be best because it gives you similar results. A lot of these analytics providers work on calendar date because they care more about calendar date numbers on the analytics side. So if your use case is to compare different providers, then yes, absolutely use absolute cohort.
Roman: The third use case is when you’re working with a publisher. Make sure you know what type of retention they use, and make sure you’re reporting on the retention that’s higher—it’s in your interest. And again, absolute is always higher than relative retention.
I’ve seen so many cases where a developer starts working with a publisher, they see the numbers are different, and they have no idea what it is. They think there’s some technical issue, but in reality, it’s just a different methodology.
Jas: 100%. I think the initial days of your campaigns or initial cohorts—relative is lower than absolute. But as you go over time, the numbers sort of match. Your older cohorts will align, but yeah, Roman, I do agree with you—you need to know that for sure.
Roman: Very good point. This is mostly about Day 1 retention. Day 7, they’re kind of the same, but you want to find the best users as soon as possible and spend as little money as possible on not-so-good users. That’s why we think it’s important, and that’s why we’re doing this video.
Jas: And Roman, we know what our users want. We know our users have different use cases for running campaigns, working with publishers, and comparing with different analytics providers. Which is why we offer both retention cohorts or cohort strategies that you can toggle on our dashboard. Let me show you.
Roman: Yes, I’ll stop sharing while Jas prepares to share the screen.
Give us a like if you learned something new today. Leave us a comment if you have any thoughts about these things or want us to cover more topics like this, and subscribe for the next episodes.
Jas: Alright, can you see it?
Roman: Awesome.
Jas: So this is our demo account and our beautiful dashboard. You can go under “My Account” and then “Manage User.”
Remember, the cohort strategy is a user preference—it’s set at the user level, not the org level. So every user can pick their own strategy. Like Roman said, these are different use cases. Maybe a UA manager wants to use relative, maybe a dev wants to use absolute or UTC—that’s what we call it. You can pick your own toggle on our dashboard for your own user.
It’s very simple. You pick “Cohort Strategy,” you pick relative or “Use UTC”—so relative is the 24 hours I talked about, UTC is just absolute. Then you can read all the details that describe the differences between relative and UTC time zones. You just hit “Update” and you’re all done.
Then you go back to all reports, and everything—not just retention, but ROI, LTV, ROAS—all your cohorts will now use the cohort strategy that you picked under “My Account.”
Roman: And the coolest thing—I was kind of surprised—is that it all calculates on the fly. Switch to absolute, one set of numbers. Switch back, other numbers. Nothing is lost. You can always find the best approach and not lose the data.
Jas: Yeah. And you see, Roman, how fast this was? I do want to talk a little bit about this. This is because of our really cool tech team. I don’t know if users watching this have noticed, but there have been amazing improvements in how fast our dashboard has become. That’s because of the re-architecture we did a while ago that leverages ClickHouse. We’re dynamically generating queries with aggregated tables that are immutable, and they’re able to pick different strategies and run different queries on the fly. So it’s super, super fast with very minimal changes. We were able to make this change happen with some really cool, innovative re-architecture that we did with our tech systems. Features are so easy to release now. Like our December newsletter—if you’re not subscribed, subscribe—it had so many new features to share.
Roman: Yeah, yeah. Basically with this, we’ve made your cohort strategy simple.
Jas: Yeah, that’s a good end for this episode.
Roman: We have another video about retention where we invited a game designer, and he explained everything you need to know about retention in general. We’ll link it somewhere here in this video so you can drill down into this rabbit hole of retention if you want.
Thanks a lot, Jas.
Jas: Thank you, Roman, for having me. Have a good one. Cheers!
Roman: Bye-bye, guys!
Tara Meyer, Marketing Content Manager