- [DaggerFlux.jl](https://github.com/FluxML/DaggerFlux.jl) for model-based parallelization of large neural networks (see the [JuliaCon 2022 exposition of the package](https://www.youtube.com/watch?v=176c4S6LqT8))
- [FluxDistributed.jl](https://github.com/DhairyaLGandhi/FluxDistributed.jl) for data-based parallelization of large neural networks (see the [JuliaCon 2022 exposition of the package](https://www.youtube.com/watch?v=176c4S6LqT8))

## Implementing new things
### The Equation Learner And Its Symbolic Representation

In many scientific and engineering applications one searches for interpretable (i.e.
human-understandable) models instead of the black-box function approximators
[…]
by the tasks in the papers). Finally, you will implement functionality that
can transform the learned model into a symbolic, human-readable, and executable
Julia expression.
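To make that last step concrete, here is a minimal sketch of turning a tiny learned "equation layer" into an executable Julia `Expr`. The `EqlLayer` struct and the `symbolic` function are illustrative assumptions, not part of any existing package:

```julia
# A hypothetical learned layer: each output applies a unary operation to a
# weighted sum of the inputs.
struct EqlLayer
    W::Matrix{Float64}      # learned weights
    ops::Vector{Symbol}     # unary operation per output row, e.g. :sin
end

# Build an Expr computing ops[i](sum_j W[i,j] * x_j) for every output i.
function symbolic(l::EqlLayer, xs::Vector{Symbol})
    outs = map(1:size(l.W, 1)) do i
        terms = [:($(l.W[i, j]) * $(xs[j])) for j in 1:size(l.W, 2)]
        :($(l.ops[i])(+($(terms...))))
    end
    :(tuple($(outs...)))
end

layer = EqlLayer([1.0 0.0; 0.5 -2.0], [:sin, :identity])
ex = symbolic(layer, [:x, :y])   # a human-readable, executable expression
f = eval(:((x, y) -> $ex))       # compile it into a callable function
@show ex Base.invokelatest(f, 0.3, 0.7)
```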
### An Evolutionary Algorithm Applied To Julia's AST

Most of the approaches to equation learning have to be differentiable by default
in order to use the traditional machinery of stochastic gradient descent with
backpropagation. This often leads to equations with too many terms, requiring
special techniques that enforce sparsity for terms with low weights.

In Julia we can however use a different learning paradigm, evolutionary
algorithms, which can work on a discrete set of expressions. The goal is to
write mutation and recombination, the basic operators of a genetic algorithm,
applied on top of Julia's AST (see the sketch after the references below).

Data: AI Feynman [database](https://space.mit.edu/home/tegmark/aifeynman.html) on symbolic regression (from the [article](https://arxiv.org/pdf/1905.11481.pdf)/[code](https://github.com/SJ001/AI-Feynman))

- Genetic Programming for Julia: fast performance and parallel island model implementation ([report](http://courses.csail.mit.edu/18.337/2015/projects/MorganFrank/projectReport.pdf))
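To give a flavour of what such operators could look like, below is a minimal sketch of a point mutation that swaps one binary operator in a Julia `Expr` tree. All names are illustrative, and the fitness function is just one possible choice:

```julia
# Collect all binary call sites (e.g. :(a + b)) in an expression tree.
function callsites!(sites, ex)
    ex isa Expr || return sites
    ex.head == :call && length(ex.args) == 3 && push!(sites, ex)
    foreach(a -> callsites!(sites, a), ex.args)
    sites
end

# Point mutation: swap the operator of one randomly chosen binary call site.
function mutate!(ex::Expr; ops = [:+, :-, :*, :/])
    sites = callsites!(Expr[], ex)
    isempty(sites) && return ex
    site = rand(sites)
    site.args[1] = rand(setdiff(ops, [site.args[1]]))
    ex
end

ex = :(x + sin(3.0 * x))
mutate!(ex)                        # e.g. :(x - sin(3.0 * x))

# One possible fitness: negative squared error against target samples
# (sample points chosen away from potential singularities).
fitness(ex) = begin
    f = eval(:(x -> $ex))
    -sum((Base.invokelatest(f, t) - sin(t))^2 for t in 0.1:0.1:1.0)
end
@show ex fitness(ex)
```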
### Distributed Optimization Package

One-click distributed optimization is at the heart of other machine learning
and optimization libraries such as PyTorch; however, equivalents are still
missing in Julia's Flux ecosystem. The goal of this project is to explore,
implement, and compare at least two state-of-the-art methods of distributed
gradient descent on data that will be provided for you.

Some of the work in this area has already been done by one of our former students,
see [this thesis](https://dspace.cvut.cz/handle/10467/97057).
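For intuition, here is a minimal sketch of the simplest such method, synchronous data-parallel gradient descent with parameter updates on the master, using `Distributed.jl` and a plain least-squares model in place of a Flux network (all names are illustrative):

```julia
using Distributed
addprocs(2)    # two local worker processes

@everywhere begin
    # Gradient of a least-squares loss on one worker's data shard.
    grad(w, X, y) = 2 .* X' * (X * w .- y) ./ length(y)
end

# Synthetic data, split into one shard per worker.
X, y = randn(100, 3), randn(100)
shards = [(X[i:2:end, :], y[i:2:end]) for i in 1:2]

w = zeros(3)
for step in 1:100
    # Workers compute gradients in parallel; the master averages and updates.
    gs = pmap(s -> grad(w, s...), shards)
    global w -= 0.1 .* (sum(gs) ./ length(gs))
end
@show w
```

This synchronous scheme is only the baseline; the project should implement and compare methods from the literature that go beyond it.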
### Planning algorithms

Extend [SymbolicPlanners.jl](https://github.com/JuliaPlanners/SymbolicPlanners.jl) with the MM-ϵ variant of bi-directional search from [MM: A bidirectional search algorithm that is guaranteed to meet in the middle](https://www.sciencedirect.com/science/article/pii/S0004370217300905). This [pull request](https://github.com/JuliaPlanners/SymbolicPlanners.jl/pull/8) might be very helpful for understanding the library better.
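The heart of MM is the node priority that forces the two search frontiers to meet in the middle. A minimal sketch, assuming our reading that the ϵ variant adds the cheapest action cost to the `2g` term (verify the exact priority and stopping condition against the paper):

```julia
# MM's priority in each direction: pr(n) = max(f(n), 2g(n)), with f = g + h;
# the ϵ variant is assumed here to add the cheapest action cost ϵ to 2g(n).
pr(g, h; ϵ = 0.0) = max(g + h, 2g + ϵ)

pr(3.0, 4.0)           # plain MM: max(7, 6) = 7.0
pr(3.0, 4.0, ϵ = 1.0)  # MM-ϵ:     max(7, 7) = 7.0
```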
### Logging profiler

In our class, we have implemented a simple version of a profiler that records the
beginning and end of each executed function call. This can be very useful for
observing the behavior of multi-threaded applications. A sketch of such a logging
profiler is implemented [here](https://github.com/pevnak/LoggingProfiler.jl). Your
goal would be to finish it into a workable library, or to identify the boundaries
that prevent it from being implemented.
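For orientation, here is a minimal sketch of the idea, recording enter/exit events with timestamps per thread. All names are illustrative and independent of LoggingProfiler.jl's actual design:

```julia
# A toy logging profiler: every wrapped call records an :enter and an :exit event.
struct Event
    name::Symbol
    kind::Symbol      # :enter or :exit
    tid::Int          # thread on which the event happened
    t::UInt64         # timestamp from time_ns()
end

const EVENTS = Vector{Event}()
const LOCK = ReentrantLock()

record!(name, kind) = lock(LOCK) do
    push!(EVENTS, Event(name, kind, Threads.threadid(), time_ns()))
end

# Wrap an expression with enter/exit records under a given label.
macro record(name, ex)
    quote
        record!($(QuoteNode(name)), :enter)
        try
            $(esc(ex))
        finally
            record!($(QuoteNode(name)), :exit)
        end
    end
end

@record work begin
    sum(abs2, randn(10^6))
end
@show length(EVENTS)   # 2: one :enter and one :exit event
```

A real profiler cannot afford the global lock and the allocations in `record!`; replacing them with preallocated per-thread buffers is where the project gets interesting.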
### A Rule Learning Algorithm

Implement the rule learning
algorithm called [`RIPPER`](http://www.cs.utsa.edu/~bylander/cs6243/cohen95ripper.pdf)
and evaluate it on a number of datasets.
* [Learning Certifiably Optimal Rule Lists for Categorical Data](https://arxiv.org/abs/1704.01701)
* [Boolean decision rules via column generation](https://proceedings.neurips.cc/paper/2018/file/743394beff4b1282ba735e5e3723ed74-Paper.pdf)
* [Learning Optimal Decision Trees with SAT](https://proceedings.neurips.cc/paper/2021/file/4e246a381baf2ce038b3b0f82c7d6fb4-Paper.pdf)
* [A SAT-based approach to learn explainable decision sets](http://www.t-news.cn/Floc2018/FLoC2018-pages/proceedings_paper_441.pdf)

To increase the impact of the project, consider interfacing it with [MLJ.jl](https://alan-turing-institute.github.io/MLJ.jl/dev/). A sketch of the rule representation these papers share is shown below.
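A rule in these systems is a conjunction of attribute tests together with a predicted class; its coverage and accuracy on a dataset are the two quantities the learning heuristics are built from. A minimal sketch (all names are illustrative):

```julia
# A condition tests a single feature; a rule is a conjunction of conditions.
struct Condition
    feature::Symbol
    value::Any
end

struct Rule
    conditions::Vector{Condition}   # antecedent: a conjunction of tests
    label::Symbol                   # consequent: predicted class
end

matches(r::Rule, x) = all(c -> x[c.feature] == c.value, r.conditions)

# Fraction of examples the rule covers, and accuracy among the covered ones.
function coverage_accuracy(r::Rule, X, y)
    covered = findall(x -> matches(r, x), X)
    isempty(covered) && return 0.0, 0.0
    length(covered) / length(X), count(i -> y[i] == r.label, covered) / length(covered)
end

X = [Dict(:outlook => :sunny, :windy => false), Dict(:outlook => :rainy, :windy => true)]
y = [:play, :stay]
r = Rule([Condition(:outlook, :sunny)], :play)
@show coverage_accuracy(r, X, y)   # (0.5, 1.0)
```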
### Parallel optimization

Implement one of the following algorithms to train neural networks in parallel. It can be implemented in a separate package, or consider extending [FluxDistributed.jl](https://github.com/DhairyaLGandhi/FluxDistributed.jl). Do not forget to verify that the method actually works! A sketch of the first method is shown after the list.

* [Local SGD with periodic averaging: Tighter analysis and adaptive synchronization](https://proceedings.neurips.cc/paper/2019/file/c17028c9b6e0c5deaad29665d582284a-Paper.pdf)
* Distributed optimization for deep learning with gossip exchange
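A minimal sketch of local SGD with periodic averaging, the scheme analyzed in the first paper, using threads and a plain least-squares objective instead of a neural network (all names are illustrative):

```julia
# Gradient of a least-squares loss on one shard.
grad(w, X, y) = 2 .* X' * (X * w .- y) ./ length(y)

# Each worker runs SGD on its own shard; every `sync_every` steps all
# workers' parameters are replaced by their average.
function local_sgd(shards; η = 0.1, steps = 200, sync_every = 10)
    ws = [zeros(size(first(shards)[1], 2)) for _ in shards]
    for t in 1:steps
        Threads.@threads for k in eachindex(shards)
            X, y = shards[k]
            ws[k] -= η .* grad(ws[k], X, y)   # local step, no communication
        end
        if t % sync_every == 0                # periodic synchronization
            w̄ = sum(ws) ./ length(ws)
            for k in eachindex(ws)
                ws[k] = copy(w̄)
            end
        end
    end
    sum(ws) ./ length(ws)
end

X, y = randn(200, 3), randn(200)
shards = [(X[i:2:end, :], y[i:2:end]) for i in 1:2]
@show local_sgd(shards)
```

Verifying the method means showing it reaches the same solution quality as plain SGD while communicating `sync_every` times less often.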
### Improving support for multi-threading functions in NNlib

[NNlib.jl](https://github.com/FluxML/NNlib.jl) is a workhorse library for deep learning in Julia (it powers Flux.jl), yet most of its functions are single-threaded. The task is to choose a few of them (e.g. `logitcrossentropy`, or the application of a non-linearity) and make them multi-threaded, as sketched below. Ideally, you should arrive at a workable pull request that will be accepted by the community. **Warning: this will require interaction with the Flux community.**
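A minimal sketch of one possible approach, threading an element-wise activation over contiguous chunks (illustrative; this is not how NNlib.jl is structured internally, and the function names are our own):

```julia
relu(x) = max(x, zero(x))

# Apply `f` element-wise into `out`, splitting the work into one chunk per thread.
function tmap!(f, out::AbstractVector, x::AbstractVector)
    n, nt = length(x), Threads.nthreads()
    Threads.@threads for t in 1:nt
        lo, hi = div((t - 1) * n, nt) + 1, div(t * n, nt)
        @inbounds for i in lo:hi
            out[i] = f(x[i])
        end
    end
    out
end

x = randn(10^7)
y = similar(x)
tmap!(relu, y, x)
@assert y == relu.(x)   # agrees with the single-threaded broadcast baseline
```

Start Julia with `julia -t auto` (or set `JULIA_NUM_THREADS`) so that more than one thread is available; benchmarking against the broadcast baseline with BenchmarkTools.jl is the natural next step.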