
Commit b048264

Authored by Maciej Makowski (maciejmakowski2003) and a co-author
Feat/docs/audio visualizer tutorial (#275)
* feat: added AnalyserNode to docs and trying to provide AudioVisaulizer example
* feat: added docs for AudioNode
* feat: added AudioNode to docs
* refactor: refactor AudioNode docs layout
* fix: removed audio channels header
* feat: added AnalyserNode to docs
* refactor: refactored bolds and methods signatures
* fix: fixed bolds
* fix: fixed naming in methods
* feat: finished AnalyserNode docs
* fix: fixed example time-domian data size based on docs
* feat: added core
* feat: added preparing canvas chapter
* refactor: refactored see your sound example
* feat: finished Create an analyzer chapter
* fix: fixed audio api for web
* feat: finished audio visualizer tutorial
* refactor: docs uses lib from .tgz
* fix: reverted .tgz to .gitignore
* refactor: pr's requested changes
* chore: trying to fix linguist settings

Co-authored-by: Maciej Makowski <maciej.makowski2608@gmail.com>
1 parent 3244d6d commit b048264

14 files changed, +1306 −49 lines changed

.gitattributes

Lines changed: 7 additions & 7 deletions
```diff
@@ -3,10 +3,10 @@
 *.bat text eol=crlf
 docs/assets/example-01.mp4 filter=lfs diff=lfs merge=lfs -text
 
-packages/react-native-audio-api/common/cpp/libs/* linguist-vendored
-packages/react-native-audio-api/android/libs/* linguist-vendored
-apps/* linguist-vendored
-docs/* linguist-vendored
-.github/* linguist-vendored
-.yarn/* linguist-vendored
-packages/audiodocs/* linguist-vendored
+packages/react-native-audio-api/common/cpp/libs/** linguist-vendored
+packages/react-native-audio-api/android/libs/** linguist-vendored
+apps/** linguist-vendored
+docs/** linguist-vendored
+.github/** linguist-vendored
+.yarn/** linguist-vendored
+packages/audiodocs/** linguist-vendored
```

apps/common-app/src/examples/AudioVisualizer/AudioVisualizer.tsx

Lines changed: 7 additions & 15 deletions
```diff
@@ -6,7 +6,7 @@ import {
   AudioBuffer,
   AudioBufferSourceNode,
 } from 'react-native-audio-api';
-import { ActivityIndicator, View, StyleSheet } from 'react-native';
+import { ActivityIndicator, View } from 'react-native';
 
 import FreqTimeChart from './FreqTimeChart';
 import { Container, Button } from '../../components';
@@ -120,7 +120,12 @@ const AudioVisualizer: React.FC = () => {
       <View
         style={{ flex: 0.5, justifyContent: 'center', alignItems: 'center' }}>
         {isLoading && <ActivityIndicator color="#FFFFFF" />}
-        <View style={styles.button}>
+        <View
+          style={{
+            justifyContent: 'center',
+            flexDirection: 'row',
+            marginTop: layout.spacing * 2,
+          }}>
           <Button
             onPress={handlePlayPause}
             title={isPlaying ? 'Pause' : 'Play'}
@@ -132,17 +137,4 @@ const AudioVisualizer: React.FC = () => {
   );
 };
 
-const styles = StyleSheet.create({
-  container: {
-    flex: 1,
-    justifyContent: 'center',
-    alignItems: 'center',
-  },
-  button: {
-    justifyContent: 'center',
-    flexDirection: 'row',
-    marginTop: layout.spacing * 2,
-  },
-});
-
 export default AudioVisualizer;
```

packages/audiodocs/docs/guides/making-a-piano-keyboard.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -388,4 +388,4 @@ In this guide, we have learned how to create a simple piano keyboard with the he
 
 ## What's next?
 
-I’m not sure, but give yourself a pat on the back – you’ve earned it! More guides are on the way, so stay tuned! 🎼
+In [the next section](/guides/noise-generation), we will learn how we can generate noise using the audio buffer source node.
```

packages/audiodocs/docs/guides/noise_generation.mdx renamed to packages/audiodocs/docs/guides/noise-generation.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -117,4 +117,4 @@ import BrownianNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/Br
 
 ## What's next?
 
-[fill after merge :)]
+In [the next section](/guides/see-your-sound), we will explore how to capture audio data, visualize this data effectively, and utilize it to create basic animations.
```
packages/audiodocs/docs/guides/see-your-sound.mdx (new file)

Lines changed: 269 additions & 0 deletions
---
sidebar_position: 5
---

import InteractiveExample from '@site/src/components/InteractiveExample';

# See your sound

In this section, we will get familiar with the capabilities of the [`AnalyserNode`](/visualization/analyser-node) interface,
focusing on how to extract audio data in order to create a simple real-time visualization of sounds.

## Base application

To kick-start things a bit, let's use code based on the previous tutorials.
It is a simple application that can load and play a sound from a file.
As before, if you would like to code along with the tutorial, copy and paste the code provided below into your project.
```tsx
import React, {
  useState,
  useEffect,
  useRef,
  useMemo,
} from 'react';
import * as FileSystem from 'expo-file-system';
import {
  AudioContext,
  AudioBuffer,
  AudioBufferSourceNode,
} from 'react-native-audio-api';
import { ActivityIndicator, View, Button, LayoutChangeEvent } from 'react-native';

const AudioVisualizer: React.FC = () => {
  const [isPlaying, setIsPlaying] = useState(false);
  const [isLoading, setIsLoading] = useState(false);

  const audioContextRef = useRef<AudioContext | null>(null);
  const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
  const audioBufferRef = useRef<AudioBuffer | null>(null);

  const handlePlayPause = () => {
    if (isPlaying) {
      bufferSourceRef.current?.stop();
    } else {
      if (!audioContextRef.current) {
        return;
      }

      bufferSourceRef.current = audioContextRef.current.createBufferSource();
      bufferSourceRef.current.buffer = audioBufferRef.current;
      bufferSourceRef.current.connect(audioContextRef.current.destination);

      bufferSourceRef.current.start();
    }

    setIsPlaying((prev) => !prev);
  };

  useEffect(() => {
    if (!audioContextRef.current) {
      audioContextRef.current = new AudioContext();
    }

    const fetchBuffer = async () => {
      setIsLoading(true);
      audioBufferRef.current = await FileSystem.downloadAsync(
        'https://software-mansion-labs.github.io/react-native-audio-api/audio/music/example-music-02.mp3',
        FileSystem.documentDirectory + 'audio.mp3'
      ).then(({ uri }) => {
        return audioContextRef.current!.decodeAudioDataSource(uri);
      });

      setIsLoading(false);
    };

    fetchBuffer();

    return () => {
      audioContextRef.current?.close();
    };
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <View style={{ flex: 0.2 }} />
      <View
        style={{ flex: 0.5, justifyContent: 'center', alignItems: 'center' }}>
        {isLoading && <ActivityIndicator color="#FFFFFF" />}
        <View
          style={{
            justifyContent: 'center',
            flexDirection: 'row',
            marginTop: 16,
          }}>
          <Button
            onPress={handlePlayPause}
            title={isPlaying ? 'Pause' : 'Play'}
            disabled={!audioBufferRef.current}
            color={'#38acdd'}
          />
        </View>
      </View>
    </View>
  );
};

export default AudioVisualizer;
```

## Create an analyzer to capture and process audio data

To obtain frequency and time-domain data, we need to utilize the [`AnalyserNode`](/visualization/analyser-node).
It is an [`AudioNode`](/core/audio-node) that passes data unchanged from input to output while enabling the extraction of this data in two domains: time and frequency.

We will use two specific `AnalyserNode` methods:
- [`getByteTimeDomainData`](/visualization/analyser-node#getbytetimedomaindata)
- [`getByteFrequencyData`](/visualization/analyser-node#getbytefrequencydata)

These methods will allow us to acquire the necessary data for our analysis.
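Before wiring the analyser into the app, it may help to see the shapes and value ranges involved. The sketch below is purely illustrative (it is not part of the example that follows) and assumes the same byte conventions as the Web Audio API: time-domain samples are bytes centred around 128 for silence, and the frequency array holds `frequencyBinCount` (that is, `fftSize / 2`) values scaled to 0–255.

```tsx
import { AudioContext } from 'react-native-audio-api';

const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
analyser.fftSize = 512;

// Time domain: `fftSize` samples of the raw waveform, one byte each.
// A value of 128 corresponds to silence (assumption based on Web Audio semantics).
const timeData = new Array<number>(analyser.fftSize);
analyser.getByteTimeDomainData(timeData);

// Frequency domain: `fftSize / 2` bins (`frequencyBinCount`),
// each scaled to the 0–255 range, where higher means louder.
const freqData = new Array<number>(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqData);
```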
```jsx {7,12,17-18,23,29,35,39,45-62,69-75}
/* ... */

import {
  AudioContext,
  AudioBuffer,
  AudioBufferSourceNode,
  AnalyserNode,
} from 'react-native-audio-api';

/* ... */

const FFT_SIZE = 512;

const AudioVisualizer: React.FC = () => {
  const [isPlaying, setIsPlaying] = useState(false);
  const [isLoading, setIsLoading] = useState(false);
  const [times, setTimes] = useState<number[]>(new Array(FFT_SIZE).fill(127));
  const [freqs, setFreqs] = useState<number[]>(new Array(FFT_SIZE / 2).fill(0));

  const audioContextRef = useRef<AudioContext | null>(null);
  const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
  const audioBufferRef = useRef<AudioBuffer | null>(null);
  const analyserRef = useRef<AnalyserNode | null>(null);

  const handlePlayPause = () => {
    if (isPlaying) {
      bufferSourceRef.current?.stop();
    } else {
      if (!audioContextRef.current || !analyserRef.current) {
        return;
      }

      bufferSourceRef.current = audioContextRef.current.createBufferSource();
      bufferSourceRef.current.buffer = audioBufferRef.current;
      bufferSourceRef.current.connect(analyserRef.current);

      bufferSourceRef.current.start();

      requestAnimationFrame(draw);
    }

    setIsPlaying((prev) => !prev);
  };

  const draw = () => {
    if (!analyserRef.current) {
      return;
    }

    const timesArrayLength = analyserRef.current.fftSize;
    const frequencyArrayLength = analyserRef.current.frequencyBinCount;

    const timesArray = new Array(timesArrayLength);
    analyserRef.current.getByteTimeDomainData(timesArray);
    setTimes(timesArray);

    const freqsArray = new Array(frequencyArrayLength);
    analyserRef.current.getByteFrequencyData(freqsArray);
    setFreqs(freqsArray);

    requestAnimationFrame(draw);
  };

  useEffect(() => {
    if (!audioContextRef.current) {
      audioContextRef.current = new AudioContext();
    }

    if (!analyserRef.current) {
      analyserRef.current = audioContextRef.current.createAnalyser();
      analyserRef.current.fftSize = FFT_SIZE;
      analyserRef.current.smoothingTimeConstant = 0.8;

      analyserRef.current.connect(audioContextRef.current.destination);
    }

    const fetchBuffer = async () => {
      setIsLoading(true);
      audioBufferRef.current = await FileSystem.downloadAsync(
        'https://software-mansion-labs.github.io/react-native-audio-api/audio/music/example-music-02.mp3',
        FileSystem.documentDirectory + 'audio.mp3'
      ).then(({ uri }) => {
        return audioContextRef.current!.decodeAudioDataSource(uri);
      });

      setIsLoading(false);
    };

    fetchBuffer();

    return () => {
      audioContextRef.current?.close();
    };
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <View style={{ flex: 0.2 }} />
      <View
        style={{ flex: 0.5, justifyContent: 'center', alignItems: 'center' }}>
        {isLoading && <ActivityIndicator color="#FFFFFF" />}
        <View
          style={{
            justifyContent: 'center',
            flexDirection: 'row',
            marginTop: 16,
          }}>
          <Button
            onPress={handlePlayPause}
            title={isPlaying ? 'Pause' : 'Play'}
            disabled={!audioBufferRef.current}
            color={'#38acdd'}
          />
        </View>
      </View>
    </View>
  );
};

export default AudioVisualizer;
```

We utilize the [`requestAnimationFrame`](https://reactnative.dev/docs/timers) method to continuously fetch and update real-time audio visualization data.
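One thing to keep in mind: the `draw` loop above schedules itself indefinitely and is never cancelled when playback is paused. A minimal way to make it stoppable is sketched below; `rafIdRef` is a hypothetical extra ref, not part of the original example, used to remember the id returned by `requestAnimationFrame` so the loop can be cancelled on pause.

```tsx
// Inside the AudioVisualizer component, next to the other refs (hypothetical addition):
const rafIdRef = useRef<number | null>(null);

const draw = () => {
  if (!analyserRef.current) {
    return;
  }

  /* ...read the time-domain and frequency data exactly as above... */

  // Remember the id so the loop can be cancelled later.
  rafIdRef.current = requestAnimationFrame(draw);
};

// When pausing (for example in handlePlayPause), stop the loop:
if (rafIdRef.current !== null) {
  cancelAnimationFrame(rafIdRef.current);
  rafIdRef.current = null;
}
```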
## Visualize time-domain and frequency data

To render both the time-domain and frequency-domain visualizations, we will use our beloved graphics library, [`react-native-skia`](https://shopify.github.io/react-native-skia/).

If you would like to know more about what the time and frequency domains are, have a look at the [Time domain vs Frequency domain](/visualization/analyser-node#time-domain-vs-frequency-domain) section of the AnalyserNode documentation, which explains those terms in detail. Otherwise, here is the code:

**Time domain**

import TimeDomain from '@site/src/examples/SeeYourSound/TimeDomainComponent';
import TimeDomainSrc from '!!raw-loader!@site/src/examples/SeeYourSound/TimeDomainSource';

<InteractiveExample component={TimeDomain} src={TimeDomainSrc} />
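The full source of this interactive example lives in the documentation site itself, but to give a rough idea of what the drawing code can look like, here is a minimal sketch of a time-domain waveform drawn with `react-native-skia`. The component and prop names are made up for illustration; it simply connects the byte values (0–255) into a stroked path:

```tsx
import React from 'react';
import { Canvas, Path, Skia } from '@shopify/react-native-skia';

interface WaveformProps {
  data: number[]; // byte time-domain data, 0–255, with 128 corresponding to silence
  width: number;
  height: number;
}

// Hypothetical component name, for illustration only.
const TimeDomainWaveform: React.FC<WaveformProps> = ({ data, width, height }) => {
  const path = Skia.Path.Make();

  data.forEach((value, index) => {
    const x = (index / data.length) * width;
    const y = (value / 255) * height; // map the 0–255 bytes onto the canvas height

    if (index === 0) {
      path.moveTo(x, y);
    } else {
      path.lineTo(x, y);
    }
  });

  return (
    <Canvas style={{ width, height }}>
      <Path path={path} style="stroke" strokeWidth={2} color="#38acdd" />
    </Canvas>
  );
};

export default TimeDomainWaveform;
```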

import FrequencyDomain from '@site/src/examples/SeeYourSound/FrequencyDomainComponent';
import FrequencyDomainSrc from '!!raw-loader!@site/src/examples/SeeYourSound/FrequencyDomainSource';

**Frequency domain**

<InteractiveExample component={FrequencyDomain} src={FrequencyDomainSrc} />
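Similarly, the frequency domain is often drawn as one bar per frequency bin. Below is another illustrative sketch (not the source of the example above), where each byte value scales the height of its bar:

```tsx
import React from 'react';
import { Canvas, Rect } from '@shopify/react-native-skia';

interface FrequencyBarsProps {
  data: number[]; // byte frequency data, one 0–255 value per bin
  width: number;
  height: number;
}

// Hypothetical component name, for illustration only.
const FrequencyBars: React.FC<FrequencyBarsProps> = ({ data, width, height }) => {
  const barWidth = width / data.length;

  return (
    <Canvas style={{ width, height }}>
      {data.map((value, index) => {
        const barHeight = (value / 255) * height; // louder bins draw taller bars

        return (
          <Rect
            key={index}
            x={index * barWidth}
            y={height - barHeight}
            width={barWidth - 1}
            height={barHeight}
            color="#38acdd"
          />
        );
      })}
    </Canvas>
  );
};

export default FrequencyBars;
```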

## What's next?

I’m not sure, but give yourself a pat on the back – you’ve earned it! More guides are on the way, so stay tuned! 🎼

packages/audiodocs/package.json

Lines changed: 2 additions & 1 deletion
```diff
@@ -30,6 +30,7 @@
     "@mdx-js/react": "^1.6.22",
     "@mui/material": "^5.12.0",
     "@swmansion/t-rex-ui": "^0.0.13",
+    "@shopify/react-native-skia": "1.10.2",
     "@vercel/og": "^0.6.2",
     "babel-polyfill": "^6.26.0",
     "babel-preset-expo": "^9.2.2",
@@ -42,7 +43,7 @@
     "react-dom": "^17.0.2",
     "react-draggable": "^4.4.5",
     "react-native": "^0.71.4",
-    "react-native-audio-api": "^0.3.1",
+    "react-native-audio-api": "./react-native-audio-api-0.4.6-beta.tgz",
     "react-native-gesture-handler": "^2.16.0",
     "react-native-reanimated": "^3.8.1",
     "react-native-web": "^0.18.12",
```
