Commit 35a392f: Update README.md
Authored and committed by orasispanhongx. 1 parent: 40ae8b8.

68 files changed: +302, -105 lines


README.md

Lines changed: 103 additions & 104 deletions
# AI Decisions, Ranking, Scoring, and Multivariate Optimization for Android/Java

Improve AI is a machine learning platform for quickly implementing app optimization, personalization, and recommendations for [iOS](https://improve.ai/ios-sdk/), [Android](https://improve.ai/android-sdk/), and [Python](https://improve.ai/python-sdk/).

The SDKs provide simple APIs for AI [decisions](https://improve.ai/decisions/), [ranking](https://improve.ai/ranking/), [scoring](https://improve.ai/scoring/), and [multivariate optimization](https://improve.ai/multivariate-optimization/) that execute immediately, on-device, with zero network latency.

## Installation

Add JitPack in your root build.gradle at the end of repositories:

```gradle
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```

Add the dependency in your app/build.gradle file:

```gradle
dependencies {
    implementation 'com.github.improve-ai:android-sdk:7.1.3'
}
```

## Initialization

Add the default track url to your AndroidManifest.xml file:

```xml
<!-- The track url is obtained from your Improve AI Gym configuration. -->
<application>
    <meta-data
        android:name="ai.improve.DEFAULT_TRACK_URL"
        android:value="https://xxxx.lambda-url.us-east-1.on.aws/" />
</application>
```

Load the model:

```Java
public class SampleApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();

        // The model url is obtained from your Improve AI Gym configuration
        String modelUrl = "https://xxxx.s3.amazonaws.com/models/latest/greetings.xgb.gz";

        DecisionModel.get("greetings").loadAsync(modelUrl);
    }
}
```

## Usage

The heart of Improve AI is the *which()* statement. *which()* is like an AI *if/then* statement.

```Java
greeting = DecisionModel.get("greetings").which("Hello", "Howdy", "Hola");
```

*which()* takes a list of *variants* and returns the best one, "best" meaning the variant that provides the highest expected reward given the current conditions.

Decision models are easily trained with [reinforcement learning](https://improve.ai/reinforcement-learning/):

```Java
if (success) {
    DecisionModel.get("greetings").addReward(1.0);
}
```

With reinforcement learning, positive rewards are assigned for positive outcomes (a "carrot") and negative rewards are assigned for undesirable outcomes (a "stick").

*which()* automatically tracks its decisions with the [Improve AI Gym](https://github.com/improve-ai/gym/). Rewards are credited to the most recent tracked decision for each model, including from a previous app session.
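
The crediting rule can be sketched in plain Java. This is illustrative bookkeeping, not the SDK's internals; the decision ids and method names here are hypothetical:

```Java
import java.util.HashMap;
import java.util.Map;

public class RewardCrediting {
    // model name -> id of its most recent tracked decision
    static final Map<String, String> lastDecision = new HashMap<>();
    // decision id -> total credited reward
    static final Map<String, Double> totalReward = new HashMap<>();

    static void track(String model, String decisionId) {
        lastDecision.put(model, decisionId);
    }

    static void addReward(String model, double reward) {
        // credit goes to the most recent decision for this model,
        // even if it was tracked in a previous app session
        String id = lastDecision.get(model);
        if (id != null) {
            totalReward.merge(id, reward, Double::sum);
        }
    }

    public static void main(String[] args) {
        track("greetings", "decision-1");
        track("greetings", "decision-2"); // a newer decision supersedes it
        addReward("greetings", 1.0);      // credited to decision-2 only
        System.out.println(totalReward);  // {decision-2=1.0}
    }
}
```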

## Contextual Decisions

Unlike A/B testing or feature flags, Improve AI uses *context* to make the best decision for each user. On Android, the following context is automatically included:

- *$country* - two letter country code
- *$lang* - two letter language code
- *$tz* - numeric GMT offset
- *$carrier* - cellular network
- *$device* - string portion of device model
- *$devicev* - device version
- *$os* - string portion of OS name
- *$osv* - OS version
- *$pixels* - screen width x screen height
- *$app* - app name
- *$appv* - app version
- *$sdkv* - Improve AI SDK version
- *$weekday* - (ISO 8601, monday==1.0, sunday==7.0) plus fractional part of day
- *$time* - fractional day since midnight
- *$runtime* - fractional days since session start
- *$day* - fractional days since born
- *$d* - the number of decisions for this model
- *$r* - total rewards for this model
- *$r/d* - total rewards/decisions
- *$d/day* - decisions/$day
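
As an illustration of how the fractional time values above could be derived (a sketch only, not the SDK's actual implementation):

```Java
public class TimeContext {
    // $time: fractional day since midnight (0.0 .. 1.0)
    static double time(int hour, int minute, int second) {
        return (hour * 3600 + minute * 60 + second) / 86400.0;
    }

    // $weekday: ISO 8601 day of week (Monday == 1.0) plus the fractional part of the day
    static double weekday(int isoDayOfWeek, double fractionalTime) {
        return isoDayOfWeek + fractionalTime;
    }

    public static void main(String[] args) {
        double t = time(18, 0, 0);         // 6 pm
        System.out.println(t);             // 0.75
        System.out.println(weekday(1, t)); // 1.75 (Monday evening)
    }
}
```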

Using the context, on a Spanish speaker's device we expect our *greetings* model to learn to choose *Hola*.

Custom context can also be provided via *given()*:

```Java
greeting = greetingsModel.given(Map.of("language", "cowboy")).which("Hello", "Howdy", "Hola");
```

Given the language is *cowboy*, the variant with the highest expected reward should be *Howdy*, and the model would learn to make that choice.

## Ranking

[Ranking](https://improve.ai/ranking/) is a fundamental task in recommender systems, search engines, and social media feeds. Fast ranking can be performed on-device in a single line of code:

```Java
rankedWines = sommelierModel.given(entree).rank(wines);
```

**Note**: Decisions are not tracked when calling *rank()*. *which()* or *decide()* must be used to train models for ranking.

## Scoring

[Scoring](https://improve.ai/scoring/) makes it easy to turn any database table into a recommendation engine.

Simply add a *score* column to the database and update the score for each row.

```Java
scores = conversionRateModel.score(rows);
```

At query time, sort the query results descending by the *score* column and the first results will be the top recommendations. This works particularly well with local databases on mobile devices, where the scores can be personalized to each individual user.

*score()* is also useful for crafting custom optimization algorithms or providing supplemental metrics in a multi-stage recommendation system.

**Note**: Decisions are not tracked when calling *score()*. *which()*, *decide()*, or *optimize()* must be used to train models for scoring.
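
The sort-by-score flow can be sketched in plain Java. The item names and score values here are illustrative; in practice the scores would come from a *score()* call and live in a database column sorted with `ORDER BY score DESC`:

```Java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ScoreSort {
    // Return up to `limit` item names, highest score first
    static List<String> topRecommendations(List<String> items, double[] scores, int limit) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < items.size(); i++) order.add(i);
        // descending by score, like ORDER BY score DESC
        order.sort(Comparator.comparingDouble((Integer i) -> -scores[i]));
        List<String> top = new ArrayList<>();
        for (int i = 0; i < Math.min(limit, order.size()); i++) {
            top.add(items.get(order.get(i)));
        }
        return top;
    }

    public static void main(String[] args) {
        List<String> items = List.of("drip brewer", "french press", "espresso maker");
        double[] scores = {0.45, 0.87, 0.12}; // placeholder scores
        System.out.println(topRecommendations(items, scores, 2));
        // [french press, drip brewer]
    }
}
```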

## Multivariate Optimization

[Multivariate optimization](https://improve.ai/multivariate-optimization/) is the joint optimization of multiple variables simultaneously. This is often useful for app configuration and performance tuning.

```Java
config = configModel.optimize(Map.of(
    "bufferSize", List.of(1024, 2048, 4096, 8192),
    "videoBitrate", List.of(256000, 384000, 512000)));
```

This example decides multiple variables simultaneously. Notice that instead of a single list of variants, a mapping of keys to arrays of variants is provided. This multivariate mode jointly optimizes all variables for the highest expected reward.

*optimize()* automatically tracks its decision with the [Improve AI Gym](https://github.com/improve-ai/gym/). Rewards are credited to the most recent decision made by the model, including from a previous app session.
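
To make "jointly optimizes" concrete: the candidate space is the cross product of every variable's variants. A plain-Java sketch of enumerating that space (illustrative only, not the SDK's implementation):

```Java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class JointVariants {
    // Enumerate every combination of the per-variable variant lists
    static List<Map<String, Object>> combinations(Map<String, List<Object>> variantMap) {
        List<Map<String, Object>> results = new ArrayList<>();
        results.add(new LinkedHashMap<>());
        for (Map.Entry<String, List<Object>> entry : variantMap.entrySet()) {
            List<Map<String, Object>> expanded = new ArrayList<>();
            for (Map<String, Object> partial : results) {
                for (Object variant : entry.getValue()) {
                    Map<String, Object> combo = new LinkedHashMap<>(partial);
                    combo.put(entry.getKey(), variant);
                    expanded.add(combo);
                }
            }
            results = expanded;
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, List<Object>> space = new LinkedHashMap<>();
        space.put("bufferSize", List.<Object>of(1024, 2048, 4096, 8192));
        space.put("videoBitrate", List.<Object>of(256000, 384000, 512000));
        // 4 buffer sizes x 3 bitrates = 12 candidate configs
        System.out.println(combinations(space).size()); // 12
    }
}
```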

## Variant Types

Variants and givens can be any JSON encodable object. This includes *Integer*, *Double*, *Boolean*, *String*, *Map*, *List*, and *null*. Nested values within collections are automatically encoded as machine learning features to assist in the decision making process.

The following are all valid:

```Java
greeting = greetingsModel.which("Hello", "Howdy", "Hola");

discount = discountModel.which(0.1, 0.2, 0.3);

enabled = featureFlagModel.which(true, false);

item = filterModel.which(item, null);

themes = Arrays.asList(
    Map.of("font", "Helvetica", "size", 12, "color", "#000000"),
    Map.of("font", "Comic Sans", "size", 16, "color", "#F0F0F0"));

theme = themeModel.which(themes);
```

## Privacy

It is strongly recommended to never include Personally Identifiable Information (PII) in variants or givens so that it is never tracked, persisted, or used as training data.

## Resources

- [Quick Start Guide](https://improve.ai/quick-start/)
- [iOS SDK API Docs](https://improve.ai/ios-sdk/)
- [Improve AI Gym](https://github.com/improve-ai/gym/)
- [Improve AI Trainer (FREE)](https://aws.amazon.com/marketplace/pp/prodview-pyqrpf5j6xv6g)
- [Improve AI Trainer (PRO)](https://aws.amazon.com/marketplace/pp/prodview-adchtrf2zyvow)
- [Reinforcement Learning](https://improve.ai/reinforcement-learning/)
- [Decisions](https://improve.ai/decisions/)
- [Ranking](https://improve.ai/ranking/)
- [Scoring](https://improve.ai/scoring/)
- [Multivariate optimization](https://improve.ai/multivariate-optimization/)

## Help Improve Our World

The mission of Improve AI is to make our corner of the world a little bit better each day. When each of us improves our corner of the world, the whole world becomes better. If your product or work does not make the world better, do not use Improve AI. Otherwise, welcome, I hope you find value in my labor of love.

-- Justin Chapweske
