# [Homework 9: Accelerating 1D convolution with threads](@id hw09)
## How to submit
Put all the code inside `hw.jl`. Zip only this file (not its parent folder) and upload it to BRUTE. You should not import anything but `Base.Threads` (or just `Threads`).
::: danger Homework (2 points)
Implement a *multithreaded* discrete 1D convolution operator[^1] without padding (the output will be shorter than the input). The required function signature is `thread_conv1d(x, w)`, where `x` is the signal array and `w` is the kernel. To test the correctness of your implementation, you can use the following example of a step function and its derivative realized by the kernel `[-1, 1]`:
[^1]: Discrete convolution with finite support [https://en.wikipedia.org/wiki/Convolution#Discrete\_convolution](https://en.wikipedia.org/wiki/Convolution#Discrete_convolution)
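The worked example referenced above is missing from this copy of the page. As a stand-in, here is a hedged sketch (the helper name `conv_ref` is ours, not part of the assignment) showing what a correct result looks like for a step function: the "valid" convolution with `[-1, 1]` is zero everywhere except for a single spike at the step.

```julia
# Single-threaded reference convolution ("valid" mode, kernel reversed),
# useful only for checking the correctness of your threaded version.
conv_ref(x, w) = [sum(x[i + k - 1] * w[end - k + 1] for k in eachindex(w))
                  for i in 1:length(x) - length(w) + 1]

x = vcat(zeros(5), ones(5))  # step function: five zeros, then five ones
w = [-1.0, 1.0]              # discrete "derivative" kernel

conv_ref(x, w)  # zero everywhere except a single spike where the step occurs
```

The sign and position of the spike depend on your indexing convention; what matters is that the flat regions map to exact zeros.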
Your parallel implementation will be tested in both single-threaded and two-threaded mode with the following inputs:
```julia
using BenchmarkTools
using Random
Random.seed!(42)
x = rand(10_000_000)
w = [1.0, 2.0, 4.0, 2.0, 1.0]
@btime thread_conv1d($x, $w);
```
On your local machine you should be able to reduce the execution time to about `0.6x` of the single-threaded time when running with two threads; however, the automatic evaluation system is a noisy environment, therefore we only require `0.8x` there. That said, please reach out to us if you encounter any issues.
**HINTS**:
- start with a single-threaded implementation
- don't forget to reverse the kernel
- the `@threads` macro should be all you need
- for testing purposes, create a simple script that you can run with `julia -t 1` and `julia -t 2`
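The hints above can be sketched as a minimal implementation. This is an illustrative sketch only, assuming the `thread_conv1d(x, w)` signature from the assignment; it is not the reference solution, and you should verify the bounds and the kernel reversal yourself.

```julia
using Base.Threads

function thread_conv1d(x::AbstractVector, w::AbstractVector)
    n, m = length(x), length(w)
    wr = reverse(w)              # hint: don't forget to reverse the kernel
    y = similar(x, n - m + 1)    # no padding, so the output is shorter
    @threads for i in 1:n - m + 1    # each output element is independent
        acc = zero(eltype(y))
        @inbounds for k in 1:m
            acc += x[i + k - 1] * wr[k]
        end
        y[i] = acc
    end
    return y
end
```

Running the same script with `julia -t 1` and `julia -t 2` lets you compare the single-threaded and two-threaded timings.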
:::