Tara Meyer
November 4, 2025
Most of today's predictive LTV (pLTV) models were not built for apps with hybrid monetization. When an app monetizes through both in-app purchases (IAP) and in-app advertising (IAA), user behavior becomes more complex. Traditional LTV models struggle to keep up, and most pLTV metrics deliver neither the cost savings nor the depth of insight you need.
"There is a real lack of effective tools on the market for apps with hybrid monetization."
– Jaspreet Bassan
Senior Product Manager, Tenjin
Hybrid monetization is everywhere today, yet analytics tools that support it are few and far between. The IAP+IAA strategy has become the standard monetization model for many apps, especially mobile games, but most analytics platforms still treat monetization as a single channel. That is a big problem for predicting LTV: when you run UA for a hybrid game, predicted LTV is essential, because you have to forecast two different revenue streams accurately to allocate budget scientifically. To fill this gap, Tenjin has launched predictive LTV for hybrid-monetization apps.
Jaspreet Bassan, Senior Product Manager at Tenjin, explains that building pLTV for hybrid monetization is far from simple. She breaks the problem down into four key challenges, detailed below.
Challenge 1: Sparse signals

A predictive LTV model needs plenty of early behavioral data to produce statistically significant, accurate forecasts. Under hybrid monetization, however, these crucial early signals are often sparse.
Our solution uses neural networks. As Jaspreet explains:
"Neural networks are a natural fit here: they can handle large volumes of data really well, which is exactly what this unique challenge demands."
"They are also good at finding complex patterns and learning from trends, without requiring heavy data processing."
She goes on to share a key result:
"It can learn user behavior from day-0 data and predict how your cohorts will perform over the next 30 days."
By recognizing complex patterns in sparse early signals, and by using granular ad-impression data as a rich source of early signals, the neural network overcomes signal sparsity. We start from a user's very first interactions with ads, which provides abundant early-stage data and significantly improves prediction accuracy.
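To illustrate the general technique, not Tenjin's actual model (which is not public), here is a minimal sketch: a tiny neural network trained with plain gradient descent to map synthetic day-0 cohort features (impressions, sessions, early IAP revenue) to a day-30 LTV target. All features, weights, and the target function are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic day-0 features per cohort: impressions, sessions, early IAP revenue.
X = rng.random((200, 3))
# Hypothetical "true" day-30 LTV: a nonlinear mix of the early signals plus noise.
y = 0.5 * X[:, 0] + 2.0 * X[:, 2] ** 2 + 0.1 * rng.standard_normal(200)

# One hidden ReLU layer, small random init.
W1 = rng.standard_normal((3, 8)) * 0.1
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.1
b2 = 0.0
lr = 0.05

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)   # ReLU hidden layer
    return h, h @ W2 + b2

for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                      # gradient of mean squared error
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (h > 0)    # backprop through the ReLU
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))   # should end up well below np.var(y)
```

The point of the sketch is the shape of the pipeline: early-cohort features in, a learned nonlinear mapping, a day-30 revenue estimate out. A production model would use far more features and data.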
Challenge 2: Whales and other outliers

Outliers are a classic modeling problem. "Whales" (very high spenders) and other extreme data points distort measurement. Say one user spends $1,000 on IAP while most users spend $10: traditional pLTV models often handle such extremes poorly, either producing overly optimistic forecasts or missing revenue potential.
Jaspreet describes Tenjin's approach: "We first feed immature cohorts into the model as one of its features and let the model predict from there, then we update cohort maturity every two hours so the model continuously adjusts its predictions."
To handle outliers, a periodically refreshed dynamic normalization step runs on a continuous data stream with ongoing monitoring:
"We use dynamic normalization, refreshed every two hours, to automatically detect and correct anomalous spending behavior, ensuring that outliers don't skew the overall forecast."
This means ordinary paying users are not drowned out: the handful of big spenders still count, but they no longer distort the averages, so your predicted-LTV picture stays realistic and actionable.
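As a minimal sketch of the idea behind outlier-robust normalization, here is a simple percentile cap (winsorization) that could be recomputed on every refresh. This is illustrative only; Tenjin's actual dynamic normalization is not public, and the percentile threshold is an assumption.

```python
import numpy as np

def normalize_spend(spend, pct=99.0):
    """Winsorize: cap values above the given percentile so a single whale
    cannot dominate cohort-level averages. Recompute the cap on each
    refresh so it tracks the current spend distribution."""
    cap = np.percentile(spend, pct)
    return np.minimum(spend, cap)

# 99 users spend $10 each; one whale spends $1,000.
spend = np.array([10.0] * 99 + [1000.0])
raw_mean = spend.mean()                    # 19.9, doubled by one whale
norm_mean = normalize_spend(spend).mean()  # close to the typical $10
```

The whale still contributes (up to the cap), but the cohort average now reflects what typical users actually spend.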
Challenge 3: Fragmented, incomplete metrics

Fragmented revenue data clouds decision-making. Most platforms make you track IAP and IAA revenue separately, which adds work: you end up toggling between two different pLTV metrics when what you actually need is one combined view.
Jaspreet points out: "When revenue data is split, you can't quickly tell which cohorts are actually profitable across both channels. It's easy to fall into optimizing each channel in isolation, which is especially harmful for hybrid-monetization apps."
She explains: "Our existing customers already know our combined LTV metric, the actual LTV available once a cohort has matured. We know hybrid apps rely on it heavily, so we built a predictive LTV metric that combines IAA and IAP." She adds:
"This will be hugely valuable for customers. We can now also provide derived predictive metrics such as pROAS and pROI, which further strengthens our reporting and data infrastructure and makes predicting true LTV simpler and more practical."
Finally, Jaspreet shares: "We built a single unified pLTV metric directly into the Tenjin dashboard. One metric shows the total predicted value from IAA and IAP, covering both short- and long-term forecasts by default."
Our solution gives you a clear, comprehensive view at a glance. No more wrestling with spreadsheets or stitching data together to understand your margins, so you can make decisions faster.
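Using the standard industry definitions (the article does not spell out Tenjin's exact formulas, so treat these as conventional assumptions), the unified metric and its derivatives reduce to simple arithmetic:

```python
def combined_pltv(pltv_iaa, pltv_iap):
    # One unified metric: predicted ad revenue + predicted purchase revenue per user.
    return pltv_iaa + pltv_iap

def proas(pltv, spend_per_user):
    # Predicted return on ad spend: predicted revenue / acquisition cost.
    return pltv / spend_per_user

def proi(pltv, spend_per_user):
    # Predicted return on investment: profit relative to acquisition cost.
    return (pltv - spend_per_user) / spend_per_user

# Hypothetical numbers: $0.80 predicted ad revenue + $1.70 predicted IAP
# revenue per install, acquired at a $2.00 CPI.
pltv = combined_pltv(0.80, 1.70)   # $2.50 combined predicted LTV
roas = proas(pltv, 2.00)           # 1.25 -> every $1 spent returns $1.25
roi = proi(pltv, 2.00)             # 0.25 -> 25% predicted profit margin
```

Without the combined metric, the same cohort would show a $0.80 IAA pLTV (apparently unprofitable at a $2.00 CPI) and a $1.70 IAP pLTV in separate reports; neither number alone tells you the campaign is actually profitable.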
Challenge 4: "Close enough" isn't good enough

Most pLTV models hover around 70–80% accuracy. That sounds fine until you need to allocate thousands of dollars of budget in real time, where a 20–30% error is unacceptable. For developers with limited budgets and teams in emerging markets, it can be fatal: every point of error means wasted spend or missed opportunity.
UA optimization runs on confidence: if you can't trust the forecast, you can't commit budget or adjust bids decisively. In a fast-moving market, every hesitation is expensive.
Highly accurate pLTV is essential for day-to-day, real-time optimization decisions:
"A 90% accuracy is fantastic and something that is really rare in this industry."
Jaspreet says: "We put a huge amount of work into building predictive LTV, and that's how we reached 90% average accuracy. We folded in a lot of expert insight and approached development from the customer's perspective."
This benchmark was achieved through rigorous neural-network training and analysis. Our pLTV supports data at the app, campaign, and country level, so you can allocate budget across countries and channels however you need.
That performance makes our pLTV a reliable metric for everyday optimization. So optimize boldly, with confidence rather than caution, and put your budget behind data you can trust.
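The article does not define how the 90% average accuracy figure is computed; one common convention for forecast accuracy is 100% minus the mean absolute percentage error (MAPE), sketched here as a generic illustration with made-up numbers:

```python
import numpy as np

def accuracy_pct(predicted, actual):
    """Forecast accuracy as 100% minus the mean absolute percentage
    error. This is a common convention, not necessarily Tenjin's
    exact definition."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    ape = np.abs(predicted - actual) / actual   # per-cohort relative error
    return 100.0 * (1.0 - ape.mean())

# Hypothetical predicted vs. matured per-user LTV for three cohorts:
pred = [2.40, 1.10, 0.95]
act  = [2.50, 1.00, 1.00]
acc = accuracy_pct(pred, act)   # a bit under 94% for these numbers
```

Under this convention, "90% average accuracy" would mean predictions land within about 10% of matured LTV on average, while individual campaigns and countries can sit above or below that average.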
"We use Tenjin's pLTV metrics to monitor the performance of new cohorts in real time without having to wait for long-term data. This allows us to make UA decisions much faster."
– James McClelland, Tapped
Read the full transcript below.
Roman: Hi everyone, welcome to another video about a Tenjin update. Today we’re talking with Jaspreet from our product team. Hi Jas.
Jas: Hello Roman.
Roman: And we’re talking about LTV prediction—what it is, why it matters, and how you can access it in Tenjin. So yeah, let’s start with a few words about yourself Jas.
Jas: Thanks, Roman. Hi everyone, I’m Jas or Jaspreet, and I’m the Staff Product Manager at Tenjin. I usually work on the data and dashboard side of things, and we’ve built predictive LTV (pLTV), especially for all of you folks out there.
Roman: Exactly. Uh, we prepared for this video a couple of slides. I’ll just share my screen and we’ll go through them and we’ll have a sort of product conversation.
So yeah: LTV prediction tailored for apps with hybrid monetization. Hybrid is the new hype now. There is a lot of hype around it, but we felt like there aren’t enough tools built for that subset of apps. Um, and LTV prediction is kind of an essential tool when you run user acquisition for a hybrid game.
Let’s see why you need to use it. Want to comment on that, Jas?
Jas: Yeah, for sure. I think that, Roman, you make an excellent point. We do need more tools for hybrid; they tend to be lacking out there. But at Tenjin, we are super focused on this.
Just to give a small intro kind of thing, focusing on hybrid specifically. So for apps that have pure in‑app ads (IAA), your revenue will most likely mature a lot faster, right? So for instance on day 3 or day 7. Whereas for hybrid apps or apps using a lot of like in‑app purchases (IAP), your IAA revenue might mature faster, but your IAP revenue may take weeks or months to mature.
Which is why we have pLTV for hybrid apps! With pLTV, you don’t need to wait for cohorts to mature. Or, like wait to make your campaign optimizations after day 30, day 60, or longer. Um, instead you can make your campaign optimization decision a lot earlier, right? So instead of waiting that long, waiting for your cohorts to mature, you can act on day two for instance.
Roman: And here I have this hypothetical example of Campaign A and Campaign B. As we can see here, on day 0 Campaign B is a clear winner. However, that might not be the truth for a hybrid app, because a user might come in, make a purchase, and it will outweigh all of the revenue that had been gained through showing ads. And here’s how it would look if there were predictions for day 14.
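The ranking flip Roman describes on the slide can be shown with toy per-install numbers (all figures hypothetical, chosen just to illustrate the mechanic):

```python
# Day-0 observed revenue per install (only ad revenue has matured):
campaign_a_day0 = 0.10   # IAP-heavy; purchases haven't landed yet
campaign_b_day0 = 0.30   # ad-heavy; revenue matures almost immediately

# Hypothetical day-14 totals once IAP revenue is factored in:
campaign_a_day14 = 0.10 + 1.20   # late purchases flip the ranking
campaign_b_day14 = 0.30 + 0.05   # ad revenue barely grows after day 0

best_day0 = "B" if campaign_b_day0 > campaign_a_day0 else "A"
best_day14 = "A" if campaign_a_day14 > campaign_b_day14 else "B"
# Day 0 says cut Campaign A; day 14 says Campaign A was the real winner.
```

A UA manager optimizing on day-0 numbers alone would have shifted budget to the losing campaign, which is exactly the gap a day-14 prediction available on day 0 is meant to close.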
We’ve already worked with a couple of companies on this. Jas worked closely with James and the team on the whole predictive LTV project. And uh, yeah, let me just read the quote:
“We use Tenjin’s pLTV metrics to monitor the performance of new cohorts in real time without having to wait for long-term data. This allows us to make UA decisions much faster.” – James McClelland, Tapped
This is exactly what Jas just said, right?
Jas: Yeah. Yeah. Um, you know Roman, so when we were doing the product development for predicted LTV we worked a lot with customers, very closely and Tapped was one of the customers even in, like, the Alpha stage. They used our predictions from much earlier and now this product is out for everyone and is available to everyone to use.
We have gotten very similar feedback from almost all our customers. We learn how they’re using this, and they’re able to make user acquisition decisions much, much faster, right.
Roman: And here it is again, like, an emphasis on speed. So if I’m waiting for 14 days, then I need to see which campaign actually performs better. If I use predicted LTV before day 14, or whatever day it is, I can reinvest money quicker. As you can see here… you will just get more money at the end.
Roman: Um, maybe for someone who doesn’t know what a hybrid app is, uh let’s talk about that. Jas, what is a hybrid app?
Jas: Cool. Yeah. So, when we say hybrid apps, we mean an app that uses the two revenue models that I quickly introduced to you in the beginning of this video, right? So one of them is um in-app advertising, which basically means your app is making revenue by showing ads to your users. And the other business model is IAP, or in-app purchases. So in-app purchases could be either you are selling products one time, it could be subscriptions, um it could be other things, but you are also making money by selling a product within your app.
So hybrid apps, they do both of these things right? They make money by showing ads and also your users can purchase your products within your app. So you’re able to make money by both ads and purchases.
And when an app uses both these models in tandem, we call them hybrid apps. Now you can see this is an app as an example of the distribution of in-app ads versus in-app purchases. It can vary. It can be like I don’t know 20% IAA and 80% IAP. Um but this is a great example here of what a hybrid app looks like.
Roman: Exactly And it seems like the future is hybrid apps…
Jas: 100%
Roman: Right, so it’s a challenge, right, because it’s so new. It’s a challenge to predict LTV for hybrid apps. In the past you might find LTV predictions for IAP apps or for IAA apps, but so far no one has done predictions for hybrid apps. So here, we’ve highlighted the words neural network. Maybe I’ll ask you to comment on that and talk a little bit more about how you approached this challenge.
Jas: Cool. Yeah. So Roman is absolutely right that Tenjin is solving this problem in a very unique manner, because there is a lack of tools out there supporting hybrid apps. We’re focusing on that market right now and, you know, Roman, you’re absolutely right that that is the future. That’s what we are seeing from an industry standpoint.
And, obviously when it comes to user acquisition decisions or campaign optimization, um however you want to call it, we’re using ad mediation data. We’ll talk more about that later on.
We’re using a lot of highly granular data available at ad impression, user level. Um, and that goes the same with in-app purchase data. And when you’re using a lot of this data, you want a machine learning algorithm that can handle it really well and it can handle this unique challenge
that you’re talking about – to support hybrid apps. Um, and neural networks are a natural choice for it.
I don’t want to go too deep into the technical aspect of what neural networks are, but you can think of neural networks as um, these are actually called artificial neural networks.
They are machine learning, one of the machine learning technologies where you can train your model and then train your model using historical data and that model is able to learn from historical patterns and trends and then make predictions.
Uh I don’t know Roman if you have any more questions I can keep talking about this. Um one thing why we used a neural network versus other things. So in our product development cycle, we did look into using simpler models. We did, you know, use many other benchmarks, a lot of benchmarks.
We’ve done a lot of benchmarks, and for us neural networks were the absolute correct choice because:
A) They’re able to handle large volumes of data really well, which is exactly what this unique challenge proposes and
B) They are able to find patterns, they’re able to learn from trends really well, so you don’t have to do a lot of feature processing.
Why would anyone use Tenjin predictions versus you know doing your own thing? We’ll talk more about that but from a neural network perspective we want this model to learn from data across all apps and all organizations and find patterns. So even if your campaign is new, let’s say, and it doesn’t have any data in the past, our model will learn from similar apps and similar orgs and make accurate predictions for your campaign.
Roman: Yeah, that was my question actually. Let’s go through some challenges for the hybrid apps. You actually already mentioned this, right?
Jas: Yeah, I did. Yeah.
Roman: Uh a lot of signals and we use impressions, ad impressions. Um and we deliver…maybe we can talk about the cohorts. The predictions are available on day zero, right?
Jas: Yes. So uh let me rephrase that actually. Um so our model learns using day zero cohort, whatever data is available and using day zero cohort it will make predictions for all the cohorts in the future until day 30. We are focused on day 30 right now. We’re going to add, well – we are in the middle of adding support for longer cohorts until day 365.
But that’s what we’re doing right now: it will take your day-zero data, learn patterns from it, and then make predictions for cohorts until day 30.
Roman: And I guess one of the most important ones we’re showing, if we’re talking about the apps showing ads, is like an ad impression, right?
Jas: Yeah. Because this is a campaign, the use case is you know user acquisition and campaign optimization. We are using very granular ad impressions as one of the features. We have a lot of features that we’re using, like around a hundred, and this is one of them.
Roman: Yeah. Yeah. Yeah. This is just to answer one of the questions that I had, also internally: how do we make sure the early predictions hold up? Because from the hyper-casual days, I know that understanding how a campaign performs early on is one of the super important factors. You need both early and late, and this is why and how we do this.
The next challenge (it was mentioned in the example from one of the first slides). We had two campaigns and at some point Campaign A actually got better than Campaign B, presumably thanks to purchase. Right. And, so we update a prediction every two hours. Can you explain what it means?
Jas: Yeah. So we are using a lot of your data, like hundreds of features: the distributions of your ad impressions, how your campaign is performing based on the changes you’ve made to your campaign, like bids, etc. And other real-world things happening outside of your whole UA strategy.
So all that data goes into the model and then the model will make new predictions based on it. Especially when your cohorts are not mature, right? Once your cohorts are mature, the model has a lot of that data already and has made very good and accurate predictions. But, especially when you’re making any changes, you know, any changes are happening to your campaigns. This is why we make predictions quickly, to factor all the real world changes happening or distribution changes that are happening and then we give you fresh predictions that reflect the reality of that world.
Roman: Is it the same for the first two hours, so I’ve started a campaign. I’m starting to see the data for the campaign. Does it mean that the next two hours after I see it, I should get some LTV predictions or I need to wait longer?
Jas: So Roman, my understanding is that we need a mature like day zero data, and then we’re able to make predictions for 1 to 30. So let’s say that you come back on day one, right? So on day one and to day 30, obviously your cohorts haven’t matured.
We use the immature cohorts also as one of the features and we add that data to the model and then the model will predict based on it. Right? So every two hours, your cohorts are also maturing and the model takes that into account and it will give you new predictions, you know for like day one to day 30. Um so that’s what we are basically doing.
Roman: Super, gotcha. Uh, challenge number three: as I mentioned, there were already products that can do predictions for IAA or for IAP.
Jas: Mhm. We decided to do the prediction in one metric for both, and we made it available in the dashboard.
Roman: Maybe Jas, you can take us through some of the thinking when this was decided… Why do we do that?
Jas: Yeah. So for product management development, the way we wanted to approach this problem was let’s build, you know, an Alpha really quickly. And, let’s have some of our customers drive the feedback and then evolve the product organically.
While we were doing this, we received a lot of the early feedback. That early feedback is very critical for making a good product for LTV prediction specifically. We want to build something quick that solves the use case for hybrid, so we went for this one metric.
So our existing customers know that we have LTV, a combined LTV metric which is the actual LTV that is available after your cohorts have matured. It is also available for immature cohort data.
And we know that hybrid apps use that metric a lot. So, we wanted to provide um a similar metric that combines both in-app ads and IAP.
For Tenjin specifically, our reporting data and infrastructure is built in a way where we already have this historical or actual LTV available. We ended up making predictions for that metric. Think of that metric as your Y axis in machine learning terminology, and then you will use all the historical trends, and then make a prediction for that right, and that’s what we are doing.
So A) it was a decision from a product perspective because we know our customers use this and this is going to be very impactful for them. And then using this metric we can now provide derivative predicted metrics like pROAS and pROI. But obviously our infrastructure, the reporting infrastructure and data infrastructure is built in such a way where making predictions for this actual LTV is a lot easier.
Roman: Gotcha. Now to the last challenge for hybrid apps, which is accuracy. We have this stunning accuracy, average accuracy of 90%.
Jas: Yes.
Roman: Um, was it always like that? Did we have to work on that? Maybe you have some benchmark on the industry.
Jas: Yeah. Um so this is a great question and I want to say our average is, and Roman, you emphasize the word average, which is, you know, the right way of talking about model accuracy, really well. Um that’s actually really good. A 90% accuracy is fantastic and something that is really rare in this industry.
This wasn’t always the case. Like I said, I mentioned briefly at the beginning of this video, like how we did product development. We started with Alpha beta production. Obviously for Alpha the accuracy was not as good.
Before Alpha was even started, we did internal benchmarking. Our GFS had predictions and we benchmarked them, and then we did our own internal benchmarking with other machine learning models. And then the way machine learning development goes is you want to look at different features: see which one is giving you good accuracy, which one ruins your accuracy, do some feature engineering, understand your data really well, understand your business really well.
This is a very organic way of how we did things, right? Like okay, how do customers make decisions? Talk to experts for instance. So a lot of that has gone into building predicted LTV and which is why I think this 90% accuracy that we received was made possible. It is because we added a lot of that expert insight into this and approached this problem from the customer perspective.
How do they make decisions? We involved them early on, so it was a long process. It wasn’t an easy process to achieve that. We worked very hard to get that 90% accuracy. That said, I do want to mention that although our average performance of the model is 90%, the accuracy varies from campaign to campaign, app to app and country to country.
I think I forgot to mention this but our predictions are available at app, campaign, and country level. So you can allocate a budget based on different countries for your campaigns, different campaigns, different channels, right? Like if your channel A is not performing for this country, you can do all sorts of things for all sorts of like countries and channels and campaigns.
Um, when it comes to accuracy of a machine learning model, it can vary from different campaigns to different countries, right? 90% is the average. So far, our customers have said they’ve seen 90% accuracy which is great but I do want to add this caveat that your accuracy can depend on your country and your campaign and channels, which is why we’re thinking of adding confidence intervals in the future. So this will help you make your decisions a lot more confidently.
Roman: Right. Right. Yeah, I think this is an important point to emphasize. Again like it’s an average even though it’s a great result. Uh but it really depends on a lot of factors. It’s still a prediction. Cool. Uh I think that was the last challenge, now the main question is how to get started.
It’s available on all paid plans. So now, we actually started with a cancel anytime feature. So you can start with us, pay 200 bucks and get access to predictive LTV. If you’re already a Tenjin customer, then you already have access to it.
Just click on the edit metric on the dashboard, find predicted LTV metrics (pLTV), predicted ROAS (pROAS), and predicted ROI (pROI). So no feature gates, just try it out and see how it works. If you already tried it, leave us a comment. If you have any questions, also leave us a comment.
Any last thoughts, Jas?
Jas: Yeah, I just want to say, Roman, thank you for highlighting that. Using Tenjin is now easier than ever. You just need to go on our dashboard and subscribe to our $200 plan, and you can cancel it whenever you want. Um, so if you’re interested, try this out. Um, yep. And we’d love to get your feedback on this, or anything else.
Roman: Super. Thanks a lot, Jas. Give us a like if you liked this video. We can do more videos. That’s only one use case for predicted LTV. There are more. We can also show a demo. There is a lot of content to explore here. So um…
Jas: Yeah there is. I can talk endlessly about this I think for three, four hours straight. We can talk about technical stuff. We can talk about the fraud use case that I didn’t mention. How to set it up. So yeah, stay tuned for more content.
Roman: Exactly. Alright then, thanks a lot Jas and thanks for…
Jas: Thank you so much Roman, yeah! Cheers, bye.