# AI Decisions, Ranking, Scoring, and Multivariate Optimization for Android/Java
Improve AI is a machine learning platform for quickly implementing app optimization, personalization, and recommendations for [iOS](https://improve.ai/ios-sdk/), [Android](https://improve.ai/android-sdk/), and [Python](https://improve.ai/python-sdk/).
The SDKs provide simple APIs for AI [decisions](https://improve.ai/decisions/), [ranking](https://improve.ai/ranking/), [scoring](https://improve.ai/scoring/), and [multivariate optimization](https://improve.ai/multivariate-optimization/) that execute immediately, on-device, with zero network latency.
Improve AI provides quick on-device AI decisions that get smarter over time. It's like an AI *if/then* statement. Replace guesses in your app's configuration with AI decisions to increase your app's revenue, user retention, or any other metric automatically.
## Installation
Add JitPack in your root build.gradle at the end of repositories:
```gradle
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```
Add the dependency in your app/build.gradle file:
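A typical JitPack dependency declaration looks like the following sketch (the artifact coordinates and version tag are assumptions; check the SDK's JitPack page for the current release):

```gradle
dependencies {
    implementation 'com.github.improve-ai:android-sdk:7.x'
}
```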
## Decisions

*which()* makes decisions on-device using a *decision model*.
*which()* takes a list of *variants* and returns the best - the "best" being the variant that provides the highest expected reward given the current conditions. When a single array argument is passed to *which()*, it is treated as a list of variants.
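For example, choosing a greeting (a minimal sketch; the model name and variants are taken from the greetings example used throughout this README):

```Java
greeting = DecisionModel.get("greetings").which("Hello", "Howdy", "Hola");
```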
Decision models are easily trained with [reinforcement learning](https://improve.ai/reinforcement-learning/):
```Java
if (success) {
    DecisionModel.get("greetings").addReward(1.0);
}
```
When the rewards are business metrics, such as revenue or user retention, the decisions will automatically optimize those metrics over time.
With reinforcement learning, positive rewards are assigned for positive outcomes (a "carrot") and negative rewards are assigned for undesirable outcomes (a "stick").
*which()* automatically tracks its decision with the [Improve AI Gym](https://github.com/improve-ai/gym/). Rewards are credited to the most recent tracked decision for each model, including from a previous app session.
## Contextual Decisions
Unlike A/B testing or feature flags, Improve AI uses *context* to make the best decision for each user. On Android, the following context is automatically included:
- *$country* - two letter country code
- *$lang* - two letter language code
- *$tz* - numeric GMT offset
- *$carrier* - cellular network
- *$device* - string portion of device model
- *$devicev* - device version
- *$os* - string portion of OS name
- *$osv* - OS version
- *$pixels* - screen width x screen height
- *$app* - app name
- *$appv* - app version
- *$sdkv* - Improve AI SDK version
- *$weekday* - (ISO 8601, monday==1.0, sunday==7.0) plus fractional part of day
- *$time* - fractional day since midnight
- *$runtime* - fractional days since session start
- *$day* - fractional days since born
- *$d* - the number of decisions for this model
- *$r* - total rewards for this model
- *$r/d* - total rewards/decisions
- *$d/day* - decisions/$day

Using the context, on a Spanish speaker's device we expect our *greetings* model to learn to choose *Hola*.

Custom context can also be provided via *given()*:
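A minimal sketch (the givens map and the chained call shape are assumptions):

```Java
greeting = greetingsModel.given(Map.of("language", "cowboy"))
                         .which("Hello", "Howdy", "Hola");
```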
Given the language is *cowboy*, the variant with the highest expected reward should be *Howdy* and the model would learn to make that choice.
## Ranking
[Ranking](https://improve.ai/ranking/) is a fundamental task in recommender systems, search engines, and social media feeds. Fast ranking can be performed on-device in a single line of code:
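For example (a sketch; the model name and item list are illustrative, and *rank()* is assumed to return the variants ordered by expected reward, best first):

```Java
rankedItems = DecisionModel.get("feed").rank(items);
```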
**Note**: Decisions are not tracked when calling *rank()*. *which()* or *decide()* must be used to train models for ranking.
## Scoring
[Scoring](https://improve.ai/scoring/) makes it easy to turn any database table into a recommendation engine.
Simply add a *score* column to the database and update the score for each row.
At query time, sort the query results descending by the *score* column and the first results will be the top recommendations. This works particularly well with local databases on mobile devices where the scores can be personalized to each individual user.
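A sketch of the scoring pass (the *score()* call shape and the persistence helper are assumptions):

```Java
// score each candidate row, then persist the scores
// for later ORDER BY score DESC queries
scores = DecisionModel.get("songs").score(songs);
for (int i = 0; i < songs.size(); i++) {
    database.updateScore(songs.get(i).getId(), scores.get(i)); // hypothetical helper
}
```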
*score()* is also useful for crafting custom optimization algorithms or providing supplemental metrics in a multi-stage recommendation system.
**Note**: Decisions are not tracked when calling *score()*. *which()*, *decide()*, or *optimize()* must be used to train models for scoring.
## Multivariate Optimization
[Multivariate optimization](https://improve.ai/multivariate-optimization/) is the joint optimization of multiple variables simultaneously. This is often useful for app configuration and performance tuning.
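For example, jointly tuning a video streaming client's configuration (a sketch; the model name, variable names, and the exact *optimize()* call shape are assumptions):

```Java
config = configModel.optimize(Map.of(
        "bufferSize", Arrays.asList(1024, 2048, 4096),
        "videoBitrate", Arrays.asList(256000, 384000, 512000)));
```

The rewards in this case might be negative to penalize any stalls during video playback:

```Java
if (videoStalled) {
    configModel.addReward(-0.001);
}
```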
This example decides multiple variables simultaneously. Notice that instead of a single list of variants, a mapping of keys to arrays of variants is provided. This multi-variate mode jointly optimizes all variables for the highest expected reward.
*optimize()* automatically tracks its decision with the [Improve AI Gym](https://github.com/improve-ai/gym/). Rewards are credited to the most recent decision made by the model, including from a previous app session.

Improve AI frees us from having to overthink our configuration values during development. We simply give it some reasonable variants and let it learn from real-world usage. Look for places where you're relying on guesses or an executive decision, and consider instead directly optimizing for the outcomes you desire. *Replace guesses with AI decisions.*
## Variant Types
Variants and givens can be any JSON encodable object. This includes *Integer*, *Double*, *Boolean*, *String*, *Map*, *List*, and *null*. Nested values within collections are automatically encoded as machine learning features to assist in the decision making process.
The following are all valid:
```Java
// simple primitive variants (model and variable names are illustrative)
greeting = greetingsModel.which("Hello", "Howdy", "Hola");
discount = discountsModel.which(0.1, 0.2, 0.3);
enableFeature = flagsModel.which(true, false);

// variants can also be complex objects, such as a list of theme maps
theme = themeModel.which(themes);
```
## Privacy
It is strongly recommended to never include Personally Identifiable Information (PII) in variants or givens so that it is never tracked, persisted, or used as training data.
## Help Improve Our World
The mission of Improve AI is to make our corner of the world a little bit better each day. When each of us improves our corner of the world, the whole world becomes better. If your product or work does not make the world better, do not use Improve AI. Otherwise, welcome, I hope you find value in my labor of love.