
Commit f3298e4

amotl and hammerhead committed
Scale: Fix formatting and wording on new content
Co-authored-by: Niklas Schmidtmer <hammerhead@users.noreply.github.com>
1 parent: 3b0771b

3 files changed: 33 additions, 33 deletions

docs/admin/clustering/scale/auto.md

Lines changed: 13 additions & 13 deletions
@@ -153,21 +153,21 @@ In this example, I created a 3-node CrateDB Cloud cluster and created this table
 
 ``` sql
 CREATE TABLE ta (
-"keyword" TEXT INDEX using fulltext
-,"ts" TIMESTAMP
-,"day" TIMESTAMP GENERATED ALWAYS AS date_trunc('day', ts)
-)
+"keyword" TEXT INDEX using fulltext,
+"ts" TIMESTAMP,
+"day" TIMESTAMP GENERATED ALWAYS AS date_trunc('day', ts)
+)
 CLUSTERED INTO 24 SHARDS;
 ```
 
 This will create 8 primary shards per node plus 8 replicas. This can be checked by looking at the number of shards. This can be done for example using the console by running this:
 
 ```sql
-select node ['name'], count(*)
-from sys.shards
-group by node ['name']
-order by 1
-limit 100;
+SELECT node ['name'], count(*)
+FROM sys.shards
+GROUP BY node ['name']
+ORDER BY 1
+LIMIT 100;
 ```
 
 In this example, the amount of shards is 16 per node.
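As a back-of-the-envelope check of the shard counts this hunk describes (24 shards on a 3-node cluster, ending up at 16 shards per node), the following sketch works through the arithmetic. It assumes one replica per primary shard, which is consistent with the "8 primaries plus 8 replicas" figure but is not stated explicitly in the diff:

```python
# Sanity-check the shard arithmetic described in the hunk above.
# Assumption: one replica per primary shard (implied by the 16-per-node figure).
nodes = 3
primary_shards = 24
replicas_per_primary = 1

# Primaries spread evenly across the cluster.
primaries_per_node = primary_shards // nodes                          # 8
# Replica shards add the same count again when replicas = 1.
replica_shards_per_node = primaries_per_node * replicas_per_primary   # 8
total_per_node = primaries_per_node + replica_shards_per_node

print(f"{primaries_per_node} primaries + {replica_shards_per_node} replicas "
      f"= {total_per_node} shards per node")
```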
@@ -185,10 +185,10 @@ To trigger the scale-out you can add a table. For example:
 
 ```sql
 CREATE TABLE tb (
-"keyword" TEXT INDEX using fulltext
-,"ts" TIMESTAMP
-,"day" TIMESTAMP GENERATED ALWAYS AS date_trunc('day', ts)
-)
+"keyword" TEXT INDEX using fulltext,
+"ts" TIMESTAMP,
+"day" TIMESTAMP GENERATED ALWAYS AS date_trunc('day', ts)
+)
 CLUSTERED INTO 18 SHARDS;
 ```

docs/admin/clustering/scale/demand.md

Lines changed: 19 additions & 19 deletions
@@ -31,12 +31,12 @@ CREATE TABLE test (
 ts TIMESTAMP,
 recorddetails TEXT,
 "day" GENERATED ALWAYS AS date_trunc('day',ts)
-)
+)
 PARTITIONED BY ("day")
-CLUSTERED INTO 4 SHARDS
-WITH (number_of_replicas=1);
+CLUSTERED INTO 4 SHARDS
+WITH (number_of_replicas = 1);
 
-INSERT INTO test (ts) VALUES ('2022-11-18'),('2022-11-19');
+INSERT INTO test (ts) VALUES ('2022-11-18'), ('2022-11-19');
 ```
 
 The shards will initially look like this:
@@ -109,13 +109,13 @@ Let’s now simulate the arrival of data during the event:
 
 ```sql
 INSERT INTO test (ts) VALUES
-('2022-11-20'),('2022-11-21'),('2022-11-22'),('2022-11-23'),
-('2022-11-24'),('2022-11-25'),('2022-11-26'),('2022-11-27'),
-('2022-11-28'),('2022-11-29'),('2022-11-30'),('2022-12-01'),
-('2022-12-02'),('2022-12-03'),('2022-12-04'),('2022-12-05'),
-('2022-12-06'),('2022-12-07'),('2022-12-08'),('2022-12-09'),
-('2022-12-10'),('2022-12-11'),('2022-12-12'),('2022-12-13'),
-('2022-12-14'),('2022-12-15'),('2022-12-16'),('2022-12-17'),
+('2022-11-20'), ('2022-11-21'), ('2022-11-22'), ('2022-11-23'),
+('2022-11-24'), ('2022-11-25'), ('2022-11-26'), ('2022-11-27'),
+('2022-11-28'), ('2022-11-29'), ('2022-11-30'), ('2022-12-01'),
+('2022-12-02'), ('2022-12-03'), ('2022-12-04'), ('2022-12-05'),
+('2022-12-06'), ('2022-12-07'), ('2022-12-08'), ('2022-12-09'),
+('2022-12-10'), ('2022-12-11'), ('2022-12-12'), ('2022-12-13'),
+('2022-12-14'), ('2022-12-15'), ('2022-12-16'), ('2022-12-17'),
 ('2022-12-18')
 ```
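Because the table is partitioned by the generated `day` column, every distinct date in that `VALUES` list creates its own partition. An illustrative sketch (not part of the commit) that rebuilds the date range and counts the partitions the insert would create:

```python
from datetime import date, timedelta

# Rebuild the VALUES list from the INSERT above: every day from
# 2022-11-20 through 2022-12-18 inclusive.
start, end = date(2022, 11, 20), date(2022, 12, 18)
days = [start + timedelta(days=i) for i in range((end - start).days + 1)]

# One tuple per day; each distinct day becomes one partition of the
# "day"-partitioned table.
values_clause = ",\n".join(f"('{d.isoformat()}')" for d in days)

print(f"{len(days)} distinct days -> {len(days)} new partitions")
```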

@@ -126,15 +126,15 @@ We can see that data from before the event stays on the baseline nodes while dat
 The same can be checked programmatically with this query:
 
 ```sql
-SELECT table_partitions.table_schema,
-table_partitions.table_name,
-table_partitions.values['day']::TIMESTAMP,
-shards.primary,
-shards.node['name']
+SELECT table_partitions.table_schema,
+table_partitions.table_name,
+table_partitions.values['day']::TIMESTAMP,
+shards.primary,
+shards.node['name']
 FROM sys.shards
 JOIN information_schema.table_partitions
-ON shards.partition_ident=table_partitions.partition_ident
-ORDER BY 1,2,3,4,5;
+ON shards.partition_ident = table_partitions.partition_ident
+ORDER BY 1, 2, 3, 4, 5;
 ```
 
 ## The day the event ends
@@ -154,7 +154,7 @@ New data should now again go the baseline nodes only.
 Let’s confirm it:
 
 ```sql
-INSERT INTO test (ts) VALUES ('2022-12-19'),('2022-12-20')
+INSERT INTO test (ts) VALUES ('2022-12-19'), ('2022-12-20');
 ```
 
 ![image|690x73](https://global.discourse-cdn.com/flex020/uploads/crate/original/1X/72b9f0bd28fb88402ea951f9f8a9a15c7c491ad2.png)

docs/admin/clustering/scale/expand.md

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ The settings that we need are:
 * `initial_master_nodes` set to the hostname or the `node.name` of the node
 * optionally we can set a `cluster.name`
 
-If you are using containers you would pass these settings with lines in the `args` section of your YAML file, otherwise you could create `/etc/crate/crate.yml` before deploying the package for your distribution (refer to https://github.com/crate/crate/blob/master/app/src/main/dist/config/crate.yml for the template), or you could prevent the package installation from auto-starting the daemon by using a mechanism such as `policy-rcd-declarative`, then edit the configuration file (`crate.yml` ), and start the `crate` daemon once all settings are ready.
+If you are using containers you would pass these settings with lines in the `args` section of your YAML file, otherwise you could create `/etc/crate/crate.yml` before deploying the package for your distribution (refer to https://github.com/crate/crate/blob/master/app/src/main/dist/config/crate.yml for the template), or you could prevent the package installation from auto-starting the daemon by using a mechanism such as `policy-rcd-declarative`, then edit the configuration file (`crate.yml`), and start the `crate` daemon once all settings are ready.
 
 ## Networking considerations
 
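A minimal sketch of what the settings listed in this hunk could look like in `/etc/crate/crate.yml`. The cluster and node names here are placeholders, not values from the commit:

```yaml
# Hypothetical /etc/crate/crate.yml for the first node; all names are
# placeholders chosen for illustration.
cluster.name: my-cluster
node.name: node1
# Bootstrap the cluster from this node's name, as described above.
cluster.initial_master_nodes:
  - node1
```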