Tara Meyer
November 4, 2025
Did you know that most predictive LTV (pLTV) models are not designed for hybrid-monetization apps? When you monetize through both in-app purchases (IAP) and in-app advertising, user behavior quickly becomes complex. Standard LTV models can't keep up, and neither can most pLTV metrics.
"There is a lack of tools in the market supporting hybrid-monetization apps."
– Jaspreet Bassan
Staff Product Manager, Tenjin
The problem is that while hybrid monetization is now everywhere, the tools for it are not. The IAP+IAA strategy has become standard across apps and mobile games, yet most analytics platforms still treat monetization as a single channel. That's a serious problem for predictive LTV, which is essential for user acquisition (UA) in hybrid games: you need accurate predictions across both revenue channels, and the right tools to optimize budgets effectively. So we built a solution to fill that gap in the market.
But building pLTV for hybrid monetization is not easy, explains Tenjin's Jaspreet Bassan. Here she walks through the four major challenges faced along the way.
Challenge 1: Lack of early signals

Predictive LTV models need a large volume of early signals to produce statistically meaningful, accurate results. With hybrid monetization, signals tend to be scarcest in exactly the early stage where predictions are needed most.
Our solution is to use neural networks. As Jaspreet explains:
"For us, neural networks were the absolute correct choice, because they are able to handle large volumes of data really well, which is exactly what this unique challenge demands."
"And they are able to find patterns. They learn from trends really well, so you don't have to do a lot of processing."
She also shares a key result: "The model learns from day-zero data and can make cohort predictions all the way to day 30."
Neural nets identify complex patterns in scarce early signals, overcoming the limited-data problem by using granular ad-impression data as a rich source of early signal. By capturing user behavior in detail from the very first ad interactions, they deliver high-accuracy predictions.
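To make the idea concrete, here is a minimal sketch of the kind of model described above: a small feedforward neural network, written in plain NumPy, that learns to map day-zero cohort signals to a day-30 LTV target. The features (ad impressions, sessions, early spend) and the synthetic data-generating rule are invented for illustration; this is not Tenjin's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic day-0 cohort features: [ad impressions, sessions, early IAP spend],
# scaled to [0, 1]. The target rule below stands in for "day-30 LTV".
X = rng.uniform(0, 1, size=(500, 3))
y = (0.5 * X[:, 0] + 1.5 * X[:, 2] + 0.2 * X[:, 1] ** 2)[:, None]

# One hidden layer with tanh activation, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
initial_loss = float(np.mean((pred0 - y) ** 2))

for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)          # MSE gradient w.r.t. pred (factor 2 folded into lr)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
final_loss = float(np.mean((pred - y) ** 2))
print(f"MSE before training: {initial_loss:.4f}, after: {final_loss:.4f}")
```

Even this toy network picks up the nonlinear relationship between day-0 behavior and later value, which is the property the article attributes to neural nets: learning patterns without heavy feature processing.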
Challenge 2: Whales and outliers

A classic problem in model building is outliers. Big spenders (whales) and extreme values distort measurements. For example, one user might spend $1,000 on IAP while most spend around $10. Traditional pLTV models handle this variance poorly, producing overly optimistic results or overlooking potential revenue.
Jaspreet's team's approach: "We use immature cohorts as one of the inputs, feeding them into the model and factoring them into predictions. Cohort maturity is updated every two hours, and predictions are adjusted accordingly."
A continuous data stream and constant monitoring keep the model current. To overcome the outlier problem, a regularly refreshed dynamic normalization is applied: "To account for anomalies in the data, we use a dynamic normalization approach that is updated every two hours. This lets us automatically adjust for outliers and unusual spending patterns."
In other words, casual spenders still count, but they don't drag the average down, and value isn't overestimated on the basis of a few big spenders. The overall picture of predicted LTV is more realistic and properly weighted.
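The dynamic-normalization idea can be sketched as a percentile-based winsorization pass whose clip bound is recomputed on each fresh batch of data (the two-hour refresh would simply mean calling it again on updated data). The function name and spend figures are invented for illustration; this is not Tenjin's actual implementation.

```python
import numpy as np

def dynamic_normalize(spend, upper_pct=99.0):
    """Clip extreme spend values to a percentile bound recomputed from
    the latest data, so a few whales don't dominate the average while
    casual spenders are left untouched."""
    spend = np.asarray(spend, dtype=float)
    cap = np.percentile(spend, upper_pct)
    return np.clip(spend, 0.0, cap), cap

# Mostly ~$10 spenders plus one $1,000 whale, as in the article's example.
spend = np.array([10.0] * 99 + [1000.0])
clipped, cap = dynamic_normalize(spend)

print(f"raw mean:     ${spend.mean():.2f}")
print(f"clipped mean: ${clipped.mean():.2f} (cap at 99th percentile = ${cap:.2f})")
```

The single whale roughly doubles the raw average; after the normalization pass the cohort mean sits near the $10 that typical users actually spend, while the whale still contributes up to the cap.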
Challenge 3: Fragmented metrics hide the full picture

Fragmented revenue analytics create an information gap. Most platforms force you to track in-app purchases (IAP) and ad revenue (IAA) separately, which adds work: you end up juggling two different pLTV metrics when you should have a single, comprehensive view.
According to Jaspreet, "When revenue data is fragmented, it's hard to see which user segments are profitable across both channels. You tend to fall into optimizing each channel in isolation, which is especially bad for hybrid-monetization apps."
She explains: "Our existing customers know we have a combined LTV metric, the actual LTV that becomes available after cohorts mature. We also know that hybrid apps use that metric a lot. So we wanted to provide a similar metric that combines both IAA and IAP." She emphasizes:
"This is going to be very impactful for our customers. Using this metric, we can now also provide derivative predicted metrics like pROAS and pROI. And our overall infrastructure, both reporting and data, is built in a way that makes predicting this actual LTV much easier and more meaningful."
In the end, Jaspreet explains, we created a single unified pLTV metric accessible directly from the Tenjin dashboard: one number reflecting the predicted total value across both IAA and IAP, with short- and long-term predictions included by default.
Our solution delivers clear, comprehensive visibility. No spreadsheet gymnastics or statistical contortions are needed to understand your margins. You can act faster, and with confidence in your decisions.
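The arithmetic behind the unified metric and its derivatives is straightforward. A hypothetical sketch (function names and figures are invented for illustration, not Tenjin's API):

```python
def combined_pltv(p_iap, p_iaa):
    """Unified predicted LTV: predicted IAP revenue plus predicted IAA revenue."""
    return p_iap + p_iaa

def p_roas(pltv, spend):
    """Predicted return on ad spend."""
    return pltv / spend

def p_roi(pltv, spend):
    """Predicted return on investment."""
    return (pltv - spend) / spend

# Hypothetical cohort: $1,200 predicted IAP, $300 predicted IAA, $1,000 UA spend.
pltv = combined_pltv(1200.0, 300.0)
print(pltv, p_roas(pltv, 1000.0), p_roi(pltv, 1000.0))  # 1500.0 1.5 0.5
```

Judged on IAP alone this cohort would look barely break-even; the combined metric shows it actually returns 1.5x spend, which is exactly the kind of signal a channel-by-channel view can hide.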
Challenge 4: "Good enough" isn't good enough

Most pLTV models are around 70-80% accurate. That may not sound bad, but for real-time budget-allocation decisions worth thousands of dollars, a 20-30% error is unacceptable, and it hits budget-conscious developers and emerging markets especially hard. Inaccurate predictions mean wasted money or missed opportunities.
Optimizing UA takes confidence. If you can't trust your predictive analytics, you won't reallocate budgets, adjust bids, or scale campaigns. In a market where speed matters, hesitation is costly.
High-accuracy pLTV is essential for day-to-day, real-time optimization:
"A 90% accuracy is fantastic and something that is really rare in this industry."
Jaspreet says: "This 90% accuracy was made possible because we added a lot of expert insight and approached the problem from the customer's perspective."
It was achieved through thorough neural-network training and analysis. pLTV is also available at the app, campaign, and country level, so you can allocate budget per country and per channel, broadening how you can use it.
This level of performance makes pLTV a metric reliable enough for everyday optimization decisions. Optimize with confidence, and put your UA budget behind a metric you can trust.
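The article does not state how the 90% figure is measured; one common convention is one minus the mean absolute percentage error (MAPE) between predicted and actual mature LTV. A sketch under that assumption, with invented cohort numbers:

```python
import numpy as np

def accuracy_from_mape(predicted, actual):
    """One common convention (an assumption here, not Tenjin's stated method):
    accuracy = 1 - mean absolute percentage error vs. actual mature LTV."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mape = np.mean(np.abs(predicted - actual) / actual)
    return 1.0 - mape

# Invented cohort-level LTV figures, in dollars, for illustration.
actual = np.array([100.0, 250.0, 80.0, 40.0])
predicted = np.array([110.0, 225.0, 88.0, 38.0])
print(f"{accuracy_from_mape(predicted, actual):.0%}")  # prints 91%
```

Under this convention a model that is off by roughly 10% per cohort scores around 90%, which gives a feel for how tight the predictions need to be across campaigns and countries.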
"We use Tenjin's pLTV metrics to monitor the performance of new cohorts in real time without having to wait for long-term data. This allows us to make UA decisions much faster."
– James McClelland, Tapped
Read the full conversation below.
Roman: Hi everyone, welcome to another video about a Tenjin update. Today we’re talking with Jaspreet from our product team. Hi Jas.
Jas: Hello Roman.
Roman: And we’re talking about LTV prediction—what it is, why it matters, and how you can access it in Tenjin. So yeah, let’s start with a few words about yourself Jas.
Jas: Thanks, Roman. Hi everyone, I’m Jas or Jaspreet, and I’m the Staff Product Manager at Tenjin. I usually work on the data and dashboard side of things, and we’ve built predictive LTV (pLTV), especially for all of you folks out there.
Roman: Exactly. Uh, we prepared for this video a couple of slides. I’ll just share my screen and we’ll go through them and we’ll have a sort of product conversation.
So yeah, LTV prediction tailored for apps with hybrid monetization. Hybrid is the new hype now. There is a lot of hype around it, but we felt like there aren’t enough tools built for that subset of apps. Um, and LTV prediction is kind of an essential tool when you run user acquisition for a hybrid game.
Let’s see why you need to use it. Want to comment on that, Jas?
Jas: Yeah for sure, I think that Roman, you make an excellent point. We do need more tools for hybrid, they tend to be lacking out there. But at Tenjin, we are super focused on this.
Just to give a small intro kind of thing, focusing on hybrid specifically. So for apps that have pure in‑app ads (IAA), your revenue will most likely mature a lot faster, right? So for instance on day 3 or day 7. Whereas for hybrid apps or apps using a lot of like in‑app purchases (IAP), your IAA revenue might mature faster, but your IAP revenue may take weeks or months to mature.
Which is why we have pLTV for hybrid apps! With pLTV, you don’t need to wait for cohorts to mature. Or, like wait to make your campaign optimizations after day 30, day 60, or longer. Um, instead you can make your campaign optimization decision a lot earlier, right? So instead of waiting that long, waiting for your cohorts to mature, you can act on day two for instance.
Roman: And here I have this hypothetical example of Campaign A and Campaign B. As we can see here on day 0, it’s like Campaign B is a clear winner here. However it might not be the truth for a hybrid app, because a user might come in, make a purchase, and it will outweigh all of the revenue that had been gained through showing the ads. And here’s how it would look if there were predictions for day 14.
We’ve already worked with a couple of companies on this. Jas worked closely with James and the team on the whole predictive LTV project. And uh yeah, let’s just read the quote:
“We use Tenjin’s pLTV metrics to monitor the performance of new cohorts in real time without having to wait for long-term data. This allows us to make UA decisions much faster.” – James McClelland, Tapped
This is exactly what Jas just said, right?
Jas: Yeah. Yeah. Um, you know Roman, so when we were doing the product development for predicted LTV we worked a lot with customers, very closely and Tapped was one of the customers even in, like, the Alpha stage. They used our predictions from much earlier and now this product is out for everyone and is available to everyone to use.
We have gotten very similar feedback from almost all our customers. We learn how they’re using this, and they’re able to make user acquisition decisions much, much faster, right.
Roman: And here it is again, like, an emphasis on speed. So if I’m waiting for 14 days, then I need to see what campaign actually performs better. If I use predicted LTV before day 14, or whatever day it is, I can reinvest money quicker, and as you can see here… you just get more money at the end.
Roman: Um, maybe for someone who doesn’t know what a hybrid app is, uh let’s talk about that. Jas, what is a hybrid app?
Jas: Cool. Yeah. So, when we say hybrid apps, we mean an app that uses the two revenue models that I quickly introduced to you in the beginning of this video, right? So one of them is um in-app advertising, which basically means your app is making revenue by showing ads to your users. And the other business model is IAP, or in-app purchases. So in-app purchases could be either you are selling products one time, it could be subscriptions, um it could be other things, but you are also making money by selling a product within your app.
So hybrid apps, they do both of these things right? They make money by showing ads and also your users can purchase your products within your app. So you’re able to make money by both ads and purchases.
And when an app uses both these models in tandem, we call them hybrid apps. Now you can see this is an app as an example of the distribution of in-app ads versus in-app purchases. It can vary. It can be like I don’t know 20% IAA and 80% IAP. Um but this is a great example here of what a hybrid app looks like.
Roman: Exactly. And it seems like the future is hybrid apps…
Jas: 100%
Roman: Right, so one of the things, but it’s a challenge right, because it’s so new. So it’s a challenge to predict LTV for hybrid apps. In the past you might find LTV predictions for in-app purchases or for IAA apps. So far no one has done the predictions for hybrid apps. So here, we’ve highlighted the words neural network. Maybe I’ll ask you to comment on that and talk a little bit more about how you approached this challenge.
Jas: Cool. Yeah. So Roman is absolutely right that Tenjin is solving this problem in a very unique manner, because there is a lack of tools out there supporting hybrid apps. We’re focusing on that market right now and you know, Roman, you’re absolutely right that that is the future. That’s what we are seeing from an industry standpoint.
And, obviously when it comes to user acquisition decisions or campaign optimization, um however you want to call it, we’re using ad mediation data. We’ll talk more about that later on.
We’re using a lot of highly granular data available at ad impression, user level. Um, and that goes the same with in-app purchase data. And when you’re using a lot of this data, you want a machine learning algorithm that can handle it really well and it can handle this unique challenge
that you’re talking about – to support hybrid apps. Um, and neural networks are a natural choice for it.
I don’t want to go too deep into the technical aspect of what neural networks are, but you can think of neural networks as um, these are actually called artificial neural networks.
They are machine learning, one of the machine learning technologies where you can train your model and then train your model using historical data and that model is able to learn from historical patterns and trends and then make predictions.
Uh I don’t know Roman if you have any more questions I can keep talking about this. Um one thing why we used a neural network versus other things. So in our product development cycle, we did look into using simpler models. We did, you know, use many other benchmarks, a lot of benchmarks.
We’ve done a lot of benchmarks, and for us neural networks were the absolute correct choice because:
A) They’re able to handle large volumes of data really well, which is exactly what this unique challenge proposes and
B) They are able to find patterns, they’re able to learn from trends really well, so you don’t have to do a lot of feature processing.
Why would anyone use Tenjin predictions versus you know doing your own thing? We’ll talk more about that but from a neural network perspective we want this model to learn from data across all apps and all organizations and find patterns. So even if your campaign is new, let’s say, and it doesn’t have any data in the past, our model will learn from similar apps and similar orgs and make accurate predictions for your campaign.
Roman: Yeah, that was my question actually. Let’s go through some challenges for the hybrid apps. You actually already mentioned this, right?
Jas: Yeah, I did. Yeah.
Roman: Uh a lot of signals and we use impressions, ad impressions. Um and we deliver…maybe we can talk about the cohorts. The predictions are available on day zero, right?
Jas: Yes. So uh let me rephrase that actually. Um so our model learns using day zero cohort, whatever data is available and using day zero cohort it will make predictions for all the cohorts in the future until day 30. We are focused on day 30 right now. We’re going to add, well – we are in the middle of adding support for longer cohorts until day 365.
But that’s what we’re doing right now: it will take your day zero data, learn patterns from it, and then make predictions for cohorts until day 30.
Roman: And I guess one of the most important ones we’re showing, if we’re talking about the apps showing ads, is like an ad impression, right?
Jas: Yeah. Because this is a campaign, the use case is you know user acquisition and campaign optimization. We are using very granular ad impressions as one of the features. We have a lot of features that we’re using, like around a hundred, and this is one of them.
Roman: Yeah. Yeah. Yeah. This is just to answer one of the questions that I had, also internally: how do we make sure the early predictions are right? Because from the hyper-casual days, I know understanding how the campaign performs early on is one of the super important factors. Like, you need both early and late, and this is why and how we do this.
The next challenge (it was mentioned in the example from one of the first slides). We had two campaigns and at some point Campaign A actually got better than Campaign B, presumably thanks to purchase. Right. And, so we update a prediction every two hours. Can you explain what it means?
Jas: Yeah. So we are using a lot of your data, like hundreds of features: the distributions of your ad impressions, how your campaign is performing based on the changes you’ve made to your campaign, like bids, etc. And other real things happening outside of your whole entire UA strategy.
So all that data goes into the model and then the model will make new predictions based on it. Especially when your cohorts are not mature, right? Once your cohorts are mature, the model has a lot of that data already and has made very good and accurate predictions. But, especially when you’re making any changes, you know, any changes are happening to your campaigns. This is why we make predictions quickly, to factor all the real world changes happening or distribution changes that are happening and then we give you fresh predictions that reflect the reality of that world.
Roman: Is it the same for the first two hours, so I’ve started a campaign. I’m starting to see the data for the campaign. Does it mean that the next two hours after I see it, I should get some LTV predictions or I need to wait longer?
Jas: So Roman, my understanding is that we need a mature like day zero data, and then we’re able to make predictions for 1 to 30. So let’s say that you come back on day one, right? So on day one and to day 30, obviously your cohorts haven’t matured.
We use the immature cohorts also as one of the features and we add that data to the model and then the model will predict based on it. Right? So every two hours, your cohorts are also maturing and the model takes that into account and it will give you new predictions, you know for like day one to day 30. Um so that’s what we are basically doing.
Roman: Super, gotcha. Uh, challenge number three. As I mentioned, there were already products that can do prediction for IAA or IAP.
Jas: Mhm. We decided to do the prediction in one metric for both, and we made it available in the dashboard.
Roman: Maybe Jas, you can take us through some of the thinking when this was decided… Why do we do that?
Jas: Yeah. So for product management development, the way we wanted to approach this problem was let’s build, you know, an Alpha really quickly. And, let’s have some of our customers drive the feedback and then evolve the product organically.
While we were doing this, we received a lot of the early feedback. That early feedback is very critical for making a good product for LTV prediction specifically. We want to build something quick that solves the use case for hybrid, so we went for this one metric.
So our existing customers know that we have LTV, a combined LTV metric which is the actual LTV that is available after your cohorts have matured. It is also available for immature cohort data.
And we know that hybrid apps use that metric a lot. So, we wanted to provide um a similar metric that combines both in-app ads and IAP.
For Tenjin specifically, our reporting data and infrastructure is built in a way where we already have this historical or actual LTV available. We ended up making predictions for that metric. Think of that metric as your Y axis in machine learning terminology, and then you will use all the historical trends, and then make a prediction for that right, and that’s what we are doing.
So A) it was a decision from a product perspective because we know our customers use this and this is going to be very impactful for them. And then using this metric we can now provide derivative predicted metrics like pROAS and pROI. But obviously our infrastructure, the reporting infrastructure and data infrastructure is built in such a way where making predictions for this actual LTV is a lot easier.
Roman: Gotcha. Now to the last challenge for hybrid apps, which is accuracy. We have this stunning accuracy, average accuracy of 90%.
Jas: Yes.
Roman: Um, was it always like that? Did we have to work on that? Maybe you have some benchmark on the industry.
Jas: Yeah. Um so this is a great question. I want to say it’s our average, and Roman, you emphasized the word average really well, which is, you know, the right way of talking about model accuracy. Um that’s actually really good. A 90% accuracy is fantastic and something that is really rare in this industry.
This wasn’t always the case. Like I said, I mentioned briefly at the beginning of this video how we did product development. We started with Alpha, then beta, then production. Obviously for the Alpha the accuracy was not as good.
Before Alpha was even started, we did internal benchmarking. Our GFS had predictions and we benchmarked them, and then we did our own internal benchmarking with other machine learning models. And then the way machine learning development goes is you want to look at different features: see which one is giving you good accuracy, which one ruins your accuracy, do some feature engineering, understand your data really well, understand your business really well.
This is a very organic way of how we did things, right? Like okay, how do customers make decisions? Talk to experts for instance. So a lot of that has gone into building predicted LTV and which is why I think this 90% accuracy that we received was made possible. It is because we added a lot of that expert insight into this and approached this problem from the customer perspective.
How do they make decisions? We involved them early on, so it was a long process. It wasn’t an easy process to achieve that. We worked very hard to get that 90% accuracy. That said, I do want to mention that although our average performance of the model is 90%, the accuracy varies from campaign to campaign, app to app and country to country.
I think I forgot to mention this but our predictions are available at app, campaign, and country level. So you can allocate a budget based on different countries for your campaigns, different campaigns, different channels, right? Like if your channel A is not performing for this country, you can do all sorts of things for all sorts of like countries and channels and campaigns.
Um, when it comes to accuracy of a machine learning model, it can vary from different campaigns to different countries, right? 90% is the average. So far, our customers have said they’ve seen 90% accuracy which is great but I do want to add this caveat that your accuracy can depend on your country and your campaign and channels, which is why we’re thinking of adding confidence intervals in the future. So this will help you make your decisions a lot more confidently.
Roman: Right. Right. Yeah, I think this is an important point to emphasize. Again like it’s an average even though it’s a great result. Uh but it really depends on a lot of factors. It’s still a prediction. Cool. Uh I think that was the last challenge, now the main question is how to get started.
It’s available on all paid plans. So now, we actually started with a cancel anytime feature. So you can start with us, pay 200 bucks and get access to predictive LTV. If you’re already a Tenjin customer, then you already have access to it.
Just click on the edit metric on the dashboard, find predicted LTV metrics (pLTV), predicted ROAS (pROAS), and predicted ROI (pROI). So no feature gates, just try it out and see how it works. If you already tried it, leave us a comment. If you have any questions, also leave us a comment.
Any last thoughts, Jas?
Jas: Yeah, I just want to say Roman, thank you for highlighting that. Using Tenjin is now easier than ever. You just need to go on our dashboard. You just need to subscribe to our $200 plan and you can cancel it whenever you want. Um, so if you’re interested, try this out. Um, yep. And we’d love to get your feedback on this, or anything else.
Roman: Super. Thanks a lot, Jas. Give us a like if you like this video. We can do more videos. That’s only one use case for predicted LTV. There are more. We can also show a demo. There is a lot of content to explore here. So um…
Jas: Yeah there is. I can talk endlessly about this I think for three, four hours straight. We can talk about technical stuff. We can talk about the fraud use case that I didn’t mention. How to set it up. So yeah, stay tuned for more content.
Roman: Exactly. Alright then, thanks a lot Jas and thanks for…
Jas: Thank you so much Roman, yeah! Cheers, bye.