Tara Meyer
November 4, 2025
Did you know that most predicted LTV (pLTV) models weren’t built for hybrid monetization apps? When you’re monetizing through both in-app purchases (IAP) and in-app advertising (IAA), user behavior gets complex fast. Standard LTV models can’t handle it, and most pLTV metrics can’t either.
“There’s a lack of tools out there supporting hybrid monetization apps.”
– Jaspreet Bassan, Senior Product Manager at Tenjin
The thing is, hybrid monetization is everywhere these days, but the tools for it? Not so much. IAP + IAA strategies have become the standard for many apps, especially mobile games, yet most analytics platforms still treat monetization like it’s single-stream. That’s a real problem for predicted LTV, which is essential when you’re running user acquisition (UA) for hybrid games. You need accurate forecasts across both revenue channels to optimize spend effectively. The tooling gap was obvious, so we built what was missing.
But building a pLTV model for hybrid monetization isn’t straightforward, explains Tenjin’s Senior Product Manager, Jaspreet Bassan. She sat down with us to share the four major challenges along the way, so hit play and keep scrolling to learn more…
Challenge #1: Signal scarcity

Predicted LTV models need high volumes of early signals to produce statistically significant, accurate results. With hybrid monetization models, those signals are often sparse in the critical early days, exactly when you need predictions the most.
Our solution uses neural networks, as Jaspreet explains:
“For us neural networks were the absolute correct choice because they’re able to handle large volumes of data really well, which is exactly what this unique challenge proposes.”
“And, they’re able to find patterns. They’re also able to learn from trends really well, so you don’t have to do a lot of feature processing.”
She continues to share a key outcome:
“It can take your day zero data, learn patterns from it, and then make predictions for cohorts until day 30.”
Neural networks can identify complex patterns even from sparse early signals. Our model leverages granular ad impression data as a rich early signal source to overcome signal scarcity: we look at how users interact with ads from the start, which provides high-volume, early-stage data for more accurate predictions.
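To make that concrete, here’s a minimal sketch of the general technique in Python. This is not Tenjin’s production model: the features, weights, and training data below are hypothetical, and the real model uses far richer inputs (Jaspreet mentions around a hundred features).

```python
# Minimal sketch of the general technique (not Tenjin's production model):
# a small feed-forward neural network mapping day-0 cohort features,
# such as ad impressions, to a matured day-30 LTV target.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500  # hypothetical users whose cohorts have already matured

# Hypothetical day-0 features: [ad impressions, sessions, IAP revenue]
X_day0 = np.column_stack([
    rng.poisson(40, n),       # ad impressions: high-volume on day 0
    rng.poisson(3, n),        # sessions
    rng.exponential(0.5, n),  # early IAP revenue: sparse but telling
]).astype(float)

# Synthetic day-30 LTV target: ads add a little per impression, early
# purchases strongly predict later spend (illustrative weights only)
y_day30 = 0.01 * X_day0[:, 0] + 0.05 * X_day0[:, 1] + 2.0 * X_day0[:, 2]
y_day30 += rng.normal(0, 0.1, n)

scaler = StandardScaler()
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(scaler.fit_transform(X_day0), y_day30)

# Predict day-30 LTV for a brand-new user with only day-0 data
new_user = scaler.transform([[55.0, 4.0, 0.99]])
print(f"Predicted day-30 LTV: ${model.predict(new_user)[0]:.2f}")
```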
Challenge #2: Whales and other outliers

A classic problem in any modeling is outliers. These are whales, or any extreme data point that distorts measurement. For example, one user spends $1,000 on an IAP while most users spend $10. Traditional pLTV models don’t know how to deal with that, so they either give you wildly optimistic results or hide real revenue potential.
Jaspreet explains her team’s approach: “We start by using the immature cohorts as one of the features. Then we add that data to the model, and the model is trained to predict based on it. So every two hours, your cohorts are maturing, and the model takes that into account and gives new predictions.”
So there’s a continuous stream of data and monitoring. To overcome the outlier issue, a dynamic normalization updates regularly:
“We use a normalization methodology that refreshes every two hours to take into account any anomalies in the data. This means we can automatically adjust for outliers or any unusual spending patterns.”
That means casual spenders still count, but whales won’t skew your average. It also means that you’re not overestimating value based on a few big spenders. Your predicted LTV picture is more realistic and better grounded.
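For illustration, here’s one simple way a periodically refreshed normalization could work, sketched as quantile capping (winsorization). The 99th-percentile cap and the revenue figures are our own assumptions, not Tenjin’s actual methodology:

```python
# Sketch of periodic outlier normalization via quantile capping
# (winsorization). The 99th-percentile cap and the two-hour refresh
# cadence are illustrative assumptions, not Tenjin's methodology.
import numpy as np

def refresh_revenue_cap(revenues: np.ndarray, quantile: float = 0.99) -> float:
    """Recompute the spend cap from the latest revenue distribution."""
    return float(np.quantile(revenues, quantile))

def normalize_revenues(revenues: np.ndarray, cap: float) -> np.ndarray:
    """Clip whale-sized purchases so they don't dominate the average."""
    return np.minimum(revenues, cap)

# Mostly $10 spenders plus one $1,000 whale, as in the example above
revenues = np.array([10.0] * 99 + [1000.0])

cap = refresh_revenue_cap(revenues)            # re-run on a schedule, e.g. every two hours
normalized = normalize_revenues(revenues, cap)

print(f"Raw mean LTV:        ${revenues.mean():.2f}")    # $19.90, distorted by the whale
print(f"Normalized mean LTV: ${normalized.mean():.2f}")  # $10.10, closer to typical users
```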
Challenge #3: Split-metrics aren’t holistic

Fragmented revenue insights cause gaps. Most platforms force you to track IAP and IAA revenue separately. That creates more work, since you’re juggling two different pLTV metrics, when they really should be combined into one holistic perspective.
When your revenue data is split, you can’t quickly identify which user segments are actually profitable across both streams, says Jaspreet. You also start to optimize in silos, which is never ideal, especially for hybrid monetization apps.
She explains that “our existing customers know that we have a combined LTV metric, which is the actual LTV that is available after your cohorts have matured. And we know that hybrid apps use that metric a lot. So, we wanted to provide a similar metric that combines both IAA and IAP.” She emphasizes that:
“This is going to be very impactful for our customers. And now we can also provide derivative predicted metrics like pROAS and pROI. It strengthens our overall infrastructure – our reporting and data infrastructure is built in such a way that making predictions for the actual LTV is also a whole lot easier and more meaningful.”
Finally, we have created a single, combined pLTV metric that can be accessed directly from the Tenjin Dashboard, shares Jaspreet. You get one metric that reflects total predicted value across both IAA and IAP, with short-term and longer-term forecasts included by default.
Our solution turns clear, holistic visibility into action. No more spreadsheet gymnastics or stretching the stats to understand your profit margins. You can act sooner and feel more confident doing it.
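The arithmetic behind the combined metric and its derivatives is simple. Here’s a sketch using the standard ROAS and ROI definitions; the cohort numbers are hypothetical, and the Tenjin Dashboard computes all of this for you:

```python
# Sketch of the arithmetic behind a combined pLTV and its derivative
# metrics, using standard ROAS/ROI definitions. All numbers are
# hypothetical; the Tenjin Dashboard computes these for you.
from dataclasses import dataclass

@dataclass
class CohortPrediction:
    p_ltv_iaa: float  # predicted ad revenue per user
    p_ltv_iap: float  # predicted purchase revenue per user
    cpi: float        # acquisition cost per user

    @property
    def p_ltv(self) -> float:
        """Combined predicted LTV across both revenue streams."""
        return self.p_ltv_iaa + self.p_ltv_iap

    @property
    def p_roas(self) -> float:
        """Predicted return on ad spend: revenue per dollar spent."""
        return self.p_ltv / self.cpi

    @property
    def p_roi(self) -> float:
        """Predicted return on investment: profit per dollar spent."""
        return (self.p_ltv - self.cpi) / self.cpi

cohort = CohortPrediction(p_ltv_iaa=0.40, p_ltv_iap=1.10, cpi=1.20)
print(f"pLTV:  ${cohort.p_ltv:.2f}")  # $1.50
print(f"pROAS: {cohort.p_roas:.0%}")  # 125%
print(f"pROI:  {cohort.p_roi:.0%}")   # 25%
```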
Challenge #4: “Close enough” isn’t good enough

Most pLTV models hover around 70–80% accuracy. That might sound decent, but when you’re making real-time budget allocation decisions worth thousands of dollars, a 20–30% margin of error is unacceptable. The cost is especially painful for budget-conscious app developers and marketers, or those in emerging markets, where inaccurate predictions quickly translate into wasted spend or missed opportunities.
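A quick back-of-the-envelope calculation shows why. Treating the error margin as dollars exposed to misallocation, with a hypothetical monthly budget:

```python
# Back-of-the-envelope: how prediction error scales with budget.
# The $50,000 monthly budget is a hypothetical illustration.
budget = 50_000.0

for accuracy in (0.70, 0.80, 0.90):
    margin = (1 - accuracy) * budget  # dollars exposed to misallocation
    print(f"{accuracy:.0%} accuracy -> up to ${margin:,.0f} at risk each month")
```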
UA optimization requires confidence. If you can’t trust your predictive analytics, you can’t confidently reallocate budget, adjust bids, or scale campaigns. In fast-paced markets, hesitation comes at a cost.
A high-accuracy pLTV is critical for everyday, real-time optimization decisions:
“90% accuracy is fantastic and something that is really rare in this industry.”
She points out that “a lot of that has gone into building predicted LTV, which is why I think this 90% accuracy rate we received was possible. We added a lot of expert insight into this and approached this problem from the customer perspective.”
This incredible benchmark was achieved through rigorous neural network training and analysis. Furthermore, our pLTV is available at the app, campaign, and country level, so you can allocate budgets by country, campaign, and channel.
This performance threshold makes our pLTV metric reliable enough for everyday optimization decisions. So go ahead: optimize with confidence, not caution, and bet your UA budget on it.
“We use Tenjin’s pLTV metrics to monitor the performance of new cohorts in real time without having to wait for long-term data. This allows us to make UA decisions much faster.”
– James McClelland, Tapped
Read the full transcript
Jas: I want to say our average is 90%, and Roman, you emphasized the word average really well, which is, you know, the right way of talking about model accuracy. Um, that’s actually really good. A 90% accuracy is fantastic and something that is really rare in this industry.
Roman: Hi everyone, welcome to another video about a Tenjin update. Today we’re talking with Jaspreet from our product team. Hi Jas.
Jas: Hello Roman.
Roman: And we’re talking about LTV prediction—what it is, why it matters, and how you can access it in Tenjin. So yeah, let’s start with a few words about yourself Jas.
Jas: Thanks, Roman. Hi everyone, I’m Jas or Jaspreet, and I’m the Staff Product Manager at Tenjin. I usually work on the data and dashboard side of things, and we’ve built predictive LTV (pLTV), especially for all of you folks out there.
Roman: Exactly. Uh, we prepared for this video a couple of slides. I’ll just share my screen and we’ll go through them and we’ll have a sort of product conversation.
So yeah, LTV prediction tailored for apps with hybrid monetization. Hybrid is the new hype now. There is a lot of hype, but we felt like there aren’t enough tools built for that subset of apps. Um, and LTV prediction is kind of an essential tool when you run user acquisition for a hybrid game.
Let’s see why you need to use it. Want to comment on that, Jas?
Jas: Yeah for sure, I think that Roman, you make an excellent point. We do need more tools for hybrid, they tend to be lacking out there. But at Tenjin, we are super focused on this.
Just to give a small intro kind of thing, focusing on hybrid specifically. So for apps that have pure in‑app ads (IAA), your revenue will most likely mature a lot faster, right? So for instance on day 3 or day 7. Whereas for hybrid apps or apps using a lot of like in‑app purchases (IAP), your IAA revenue might mature faster, but your IAP revenue may take weeks or months to mature.
Which is why we have pLTV for hybrid apps! With pLTV, you don’t need to wait for cohorts to mature. Or, like wait to make your campaign optimizations after day 30, day 60, or longer. Um, instead you can make your campaign optimization decision a lot earlier, right? So instead of waiting that long, waiting for your cohorts to mature, you can act on day two for instance.
Roman: And here I have this hypothetical example of Campaign A and Campaign B. As we can see here on day 0, it looks like Campaign B is a clear winner. However, it might not be the truth for a hybrid app, because a user might come in, make a purchase, and it will outweigh all of the revenue that had been gained through showing the ads. And here’s how it would look if there were predictions for day 14.
We’ve already worked with a couple of companies on this. Jas worked closely with James and the team on the whole predictive LTV project. And uh yeah, let me just read the quote:
“We use Tenjin’s pLTV metrics to monitor the performance of new cohorts in real time without having to wait for long-term data. This allows us to make UA decisions much faster.” – James McClelland, Tapped
This is exactly what Jas just said, right?
Jas: Yeah. Yeah. Um, you know Roman, so when we were doing the product development for predicted LTV we worked a lot with customers, very closely and Tapped was one of the customers even in, like, the Alpha stage. They used our predictions from much earlier and now this product is out for everyone and is available to everyone to use.
We have gotten very similar feedback from almost all our customers. You learn how they’re using this, and they’re able to make user acquisition decisions much, much faster, right.
Roman: And here it is again, like, an emphasis on speed. So if I’m waiting for 14 days to see which campaign actually performs better, that’s time lost. If I use predicted LTV before day 14, or whatever day it is, I can reinvest money quicker, and as you can see here… you will just get more money at the end.
Roman: Um, maybe for someone who doesn’t know what a hybrid app is, uh let’s talk about that. Jas, what is a hybrid app?
Jas: Cool. Yeah. So, when we say hybrid apps, we mean an app that uses the two revenue models that I quickly introduced to you in the beginning of this video, right? So one of them is um in-app advertising, which basically means your app is making revenue by showing ads to your users. And the other business model is IAP, or in-app purchases. So in-app purchases could be either you are selling products one time, it could be subscriptions, um it could be other things, but you are also making money by selling a product within your app.
So hybrid apps, they do both of these things right? They make money by showing ads and also your users can purchase your products within your app. So you’re able to make money by both ads and purchases.
And when an app uses both these models in tandem, we call them hybrid apps. Now you can see this is an app as an example of the distribution of in-app ads versus in-app purchases. It can vary. It can be like I don’t know 20% IAA and 80% IAP. Um but this is a great example here of what a hybrid app looks like.
Roman: Exactly. And it seems like the future is hybrid apps…
Jas: 100%
Roman: Right, so one of the things, but it’s a challenge right, because it’s so new. So it’s a challenge to predict LTV for hybrid apps. In the past you might find LTV predictions for in-app purchase apps or for IAA apps. So far no one has done the predictions for hybrid apps. So here, we’ve highlighted the words neural network. Maybe I’ll just ask for you to comment on that and talk a little bit more about how you approached this challenge.
Jas: Cool. Yeah. So Roman is absolutely right that Tenjin is solving this problem in a very unique manner, because there is a lack of tools out there supporting hybrid apps. We’re focusing on that market right now and you know, Roman, you’re absolutely right that that is the future. That’s what we are seeing from an industry standpoint.
And, obviously when it comes to user acquisition decisions or campaign optimization, um however you want to call it, we’re using ad mediation data. We’ll talk more about that later on.
We’re using a lot of highly granular data available at the ad impression, user level. Um, and the same goes for in-app purchase data. And when you’re using a lot of this data, you want a machine learning algorithm that can handle it really well and can handle this unique challenge that you’re talking about – supporting hybrid apps. Um, and neural networks are a natural choice for it.
I don’t want to go too deep into the technical aspect of what neural networks are, but you can think of them like this – they’re actually called artificial neural networks. They are one of the machine learning technologies where you train your model using historical data, and that model is able to learn from historical patterns and trends and then make predictions.
Uh I don’t know Roman if you have any more questions I can keep talking about this. Um one thing why we used a neural network versus other things. So in our product development cycle, we did look into using simpler models. We did, you know, use many other benchmarks, a lot of benchmarks.
We’ve done a lot of benchmarks, and for us neural networks were the absolute correct choice because:
A) They’re able to handle large volumes of data really well, which is exactly what this unique challenge proposes and
B) They are able to find patterns, they’re able to learn from trends really well, so you don’t have to do a lot of feature processing.
Why would anyone use Tenjin predictions versus you know doing your own thing? We’ll talk more about that but from a neural network perspective we want this model to learn from data across all apps and all organizations and find patterns. So even if your campaign is new, let’s say, and it doesn’t have any data in the past, our model will learn from similar apps and similar orgs and make accurate predictions for your campaign.
Roman: Yeah, that was my question actually. Let’s go through some challenges for the hybrid apps. You actually already mentioned this, right?
Jas: Yeah, I did. Yeah.
Roman: Uh a lot of signals and we use impressions, ad impressions. Um and we deliver…maybe we can talk about the cohorts. The predictions are available on day zero, right?
Jas: Yes. So uh let me rephrase that actually. Um so our model learns using the day zero cohort, whatever data is available, and using the day zero cohort it will make predictions for all the cohorts in the future until day 30. We are focused on day 30 right now. We’re going to add – well, we are in the middle of adding support for longer cohorts, up to day 365.
But that’s what we’re doing right now: it will take your day zero data, learn patterns from it, and then make predictions for cohorts until day 30.
Roman: And I guess one of the most important signals, if we’re talking about apps that show ads, is the ad impression, right?
Jas: Yeah. Because this is a campaign, the use case is you know user acquisition and campaign optimization. We are using very granular ad impressions as one of the features. We have a lot of features that we’re using, like around a hundred, and this is one of them.
Roman: Yeah. Yeah. Yeah. This just answers one of the questions I had, also internally: how do we make sure early predictions reflect how the campaign performs early on? Because from the hyper-casual days, I know that’s one of the super important factors to understand. Like, you need both early and late signals, and this is why and how we do this.
The next challenge was mentioned in the example from one of the first slides: we had two campaigns, and at some point Campaign A actually got better than Campaign B, presumably thanks to purchases. Right. And so we update the prediction every two hours. Can you explain what that means?
Jas: Yeah. So we are using a lot of your data, like hundreds of features: the distributions of your ad impressions, how your campaign is performing based on the changes you’ve made to your campaign, like bids, etc. And other real-world things happening outside of your entire UA strategy.
So all that data goes into the model and then the model will make new predictions based on it. Especially when your cohorts are not mature, right? Once your cohorts are mature, the model has a lot of that data already and has made very good and accurate predictions. But, especially when you’re making any changes, you know, any changes are happening to your campaigns. This is why we make predictions quickly, to factor all the real world changes happening or distribution changes that are happening and then we give you fresh predictions that reflect the reality of that world.
Roman: Is it the same for the first two hours? So I’ve started a campaign and I’m starting to see the data for it. Does that mean that within two hours after I see it, I should get some LTV predictions, or do I need to wait longer?
Jas: So Roman, my understanding is that we need mature day zero data, and then we’re able to make predictions for days 1 to 30. So let’s say that you come back on day one, right? From day one to day 30, obviously your cohorts haven’t matured.
We use the immature cohorts also as one of the features and we add that data to the model and then the model will predict based on it. Right? So every two hours, your cohorts are also maturing and the model takes that into account and it will give you new predictions, you know for like day one to day 30. Um so that’s what we are basically doing.
Roman: Super, gotcha. Uh, challenge number three: as I mentioned, there were already products that can do prediction for IAA or IAP.
Jas: Mhm. We decided to do the prediction in one metric for both, and we made it available in the dashboard.
Roman: Maybe Jas, you can take us through some of the thinking when this was decided… Why do we do that?
Jas: Yeah. So for product management development, the way we wanted to approach this problem was let’s build, you know, an Alpha really quickly. And, let’s have some of our customers drive the feedback and then evolve the product organically.
While we were doing this, we received a lot of the early feedback. That early feedback is very critical for making a good product for LTV prediction specifically. We want to build something quick that solves the use case for hybrid, so we went for this one metric.
So our existing customers know that we have LTV, a combined LTV metric which is the actual LTV that is available after your cohorts have matured. It is also available for immature cohort data.
And we know that hybrid apps use that metric a lot. So, we wanted to provide um a similar metric that combines both in-app ads and IAP.
For Tenjin specifically, our reporting data and infrastructure is built in a way where we already have this historical or actual LTV available. We ended up making predictions for that metric. Think of that metric as your Y axis in machine learning terminology, and then you will use all the historical trends, and then make a prediction for that right, and that’s what we are doing.
So A) it was a decision from a product perspective because we know our customers use this and this is going to be very impactful for them. And then using this metric we can now provide derivative predicted metrics like pROAS and pROI. But obviously our infrastructure, the reporting infrastructure and data infrastructure is built in such a way where making predictions for this actual LTV is a lot easier.
Roman: Gotcha. Now to the last challenge for hybrid apps, which is accuracy. We have this stunning accuracy, average accuracy of 90%.
Jas: Yes.
Roman: Um, was it always like that? Did we have to work on that? Maybe you have some benchmark on the industry.
Jas: Yeah. Um, so this is a great question. I want to say our average is 90%, and Roman, you emphasized the word average really well, which is, you know, the right way of talking about model accuracy. Um, that’s actually really good. A 90% accuracy is fantastic and something that is really rare in this industry.
This wasn’t always the case. Like I said, I mentioned briefly at the beginning of this video how we did product development. We started with Alpha, Beta, production. Obviously for the Alpha the accuracy was not as good.
Before Alpha was even started, we did internal benchmarking. Our GFS had predictions and we benchmarked them, and then we did our own internal benchmarking with other machine learning models. And then the way machine learning development goes is you want to look at different features: see which one is giving you good accuracy, which one ruins your accuracy, do some feature engineering, understand your data really well, understand your business really well.
This is a very organic way of how we did things, right? Like, okay, how do customers make decisions? Talk to experts, for instance. So a lot of that has gone into building predicted LTV, which is why I think this 90% accuracy that we received was made possible. It is because we added a lot of that expert insight into this and approached this problem from the customer perspective.
How do they make decisions? We involved them early on, so it was a long process. It wasn’t an easy process to achieve that. We worked very hard to get that 90% accuracy. That said, I do want to mention that although our average performance of the model is 90%, the accuracy varies from campaign to campaign, app to app and country to country.
I think I forgot to mention this but our predictions are available at app, campaign, and country level. So you can allocate a budget based on different countries for your campaigns, different campaigns, different channels, right? Like if your channel A is not performing for this country, you can do all sorts of things for all sorts of like countries and channels and campaigns.
Um, when it comes to accuracy of a machine learning model, it can vary from different campaigns to different countries, right? 90% is the average. So far, our customers have said they’ve seen 90% accuracy which is great but I do want to add this caveat that your accuracy can depend on your country and your campaign and channels, which is why we’re thinking of adding confidence intervals in the future. So this will help you make your decisions a lot more confidently.
Roman: Right. Right. Yeah, I think this is an important point to emphasize. Again like it’s an average even though it’s a great result. Uh but it really depends on a lot of factors. It’s still a prediction. Cool. Uh I think that was the last challenge, now the main question is how to get started.
It’s available on all paid plans. And we actually launched with a cancel-anytime option. So you can start with us, pay 200 bucks, and get access to predictive LTV. If you’re already a Tenjin customer, then you already have access to it.
Just click on edit metrics on the dashboard and find the predicted LTV (pLTV), predicted ROAS (pROAS), and predicted ROI (pROI) metrics. So no feature gates, just try it out and see how it works. If you’ve already tried it, leave us a comment. If you have any questions, also leave us a comment.
Any last thoughts, Jas?
Jas: Yeah, I just want to say Roman, thank you for highlighting that. Using Tenjin is now easier than ever. You just need to go on our dashboard. You just need to subscribe to our $200 plan, and you can cancel it whenever you want. Um, so if you’re interested, try this out. Um, yep. And we’d love to get your feedback on this, or anything else.
Roman: Super. Thanks a lot, Jas. Give us a like if you liked this video. We can do more videos. That’s only one use case for predicted LTV. There are more. We can also show a demo. There is a lot of content to explore here. So um…
Jas: Yeah there is. I can talk endlessly about this I think for three, four hours straight. We can talk about technical stuff. We can talk about the fraud use case that I didn’t mention. How to set it up. So yeah, stay tuned for more content.
Roman: Exactly. Alright then, thanks a lot Jas and thanks for…
Jas: Thank you so much Roman, yeah! Cheers, bye.