
@or-tensorleap

Adding an nn.Fold-like module
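
PyTorch's nn.Fold is the inverse of Unfold (col2im): it takes a batch of flattened sliding-window blocks of shape (N, C*kh*kw, L) and scatters them back into a spatial tensor of shape (N, C, H, W), summing values where blocks overlap. ONNX has no single equivalent op, hence a custom Keras layer. Below is a minimal sketch of such a layer, ignoring dilation and padding and assuming a static channel dimension; the class and argument names are illustrative, not necessarily this PR's actual implementation.

```python
import tensorflow as tf

class FoldLayer(tf.keras.layers.Layer):
    """col2im: scatter (N, C*kh*kw, L) blocks back into (N, C, H, W)."""

    def __init__(self, output_size, kernel_size, strides=1, **kwargs):
        super().__init__(**kwargs)
        self.output_size = tuple(output_size)    # (H, W)
        self.kernel_size = tuple(kernel_size)    # (kh, kw)
        self.strides = (strides, strides) if isinstance(strides, int) else tuple(strides)

    def call(self, inputs):
        kh, kw = self.kernel_size
        h, w = self.output_size
        sh, sw = self.strides
        n = tf.shape(inputs)[0]
        c = inputs.shape[1] // (kh * kw)         # needs a static channel dim
        out = tf.zeros([n, c, h, w], dtype=inputs.dtype)
        block = 0
        # Enumerate sliding-window positions in the same row-major order as
        # torch.nn.Unfold, padding each patch out to (H, W) and summing, so
        # overlapping blocks accumulate exactly as nn.Fold does.
        for i in range(0, h - kh + 1, sh):
            for j in range(0, w - kw + 1, sw):
                patch = tf.reshape(inputs[:, :, block], (-1, c, kh, kw))
                out += tf.pad(patch, [[0, 0], [0, 0],
                                      [i, h - i - kh], [j, w - j - kw]])
                block += 1
        return out

    def get_config(self):
        config = super().get_config()
        config.update({"output_size": self.output_size,
                       "kernel_size": self.kernel_size,
                       "strides": self.strides})
        return config
```

With non-overlapping blocks (stride equal to kernel size) this is an exact inverse of Unfold; with overlaps, the summation means Fold(Unfold(x)) scales each entry by its overlap count, matching PyTorch's documented behavior.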

doronHarnoy and others added 30 commits May 24, 2022 13:25
* convert to poetry

* add make file

* remove unused files

* update license

* throw warning if lambda layers exist

* add custom onnx layer and hard sigmoid in it (see the sketch after this batch)

* connect hard sigmoid layer to converter

* test hard sigmoid

* add get config for onnx custom layer

* send custom onnx objects map when loading from config

* add keras-data-format-converter

* add all interpolation modes to upsample

* add all deeplab version tests

* add convert_channels_first_to_last to testing
* raise if a lambda layer exists

* remove change_ordering from tests and use our data-format first-to-last conversion with transform_inputs
* add OnnxReduceMean layer

* replace lambda in new reduce mean layer

* add get_special_data_format_handler for reduce_mean_layer

* update keras-data-format-converter
* move data format converter to dev dependency

* 0.0.34
* add special layer for erf operation

* connect onnx operator to new tf layer
* convert all element-wise layers to use a Keras layer and fall back to an operation instead of a lambda

* add sqrt operation layer

* use keras merge layers only for two tensors

* use oplambda for power
* multiple-axes slicing support

* upgrade versions
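
The custom-layer commits in this batch share one pattern: replace anonymous Lambda layers (which the PR warns about and later raises on) with named Keras layers that implement get_config, and pass a custom-objects map when loading from config. A minimal sketch of that pattern, using HardSigmoid since ONNX defines it as clip(alpha * x + beta, 0, 1); the class name is illustrative, not necessarily the repo's actual one.

```python
import tensorflow as tf

class OnnxHardSigmoid(tf.keras.layers.Layer):
    """ONNX HardSigmoid: y = max(0, min(1, alpha * x + beta))."""

    def __init__(self, alpha=0.2, beta=0.5, **kwargs):  # ONNX defaults
        super().__init__(**kwargs)
        self.alpha = alpha
        self.beta = beta

    def call(self, inputs):
        return tf.clip_by_value(self.alpha * inputs + self.beta, 0.0, 1.0)

    def get_config(self):
        # A serializable config is what Lambda layers lack; with it the
        # model round-trips cleanly through save/load.
        config = super().get_config()
        config.update({"alpha": self.alpha, "beta": self.beta})
        return config

# On load, the custom objects map lets Keras resolve the class:
# tf.keras.models.load_model(path, custom_objects={"OnnxHardSigmoid": OnnxHardSigmoid})
```
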
RanHomri and others added 25 commits November 10, 2024 15:04
* handling constants with a custom layer (mul); see the sketch after this group

* modify mul to use lambda rather than custom layer

* fixed eager tensor and dtype errors

* added kwargs

* gitignore + convert eager tensor to numpy

* deleted custom layer

* git ignore

* added dtype conversion

* elementwise sub with proper const handling

* elementwise add const handling

* up version

* docstrings

* padding

* with padding fix

* remove duplicated code

* bugfix

---------

Co-authored-by: ranhomri <ran.homri@tensorleap.ai>
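
The constant-handling commits above converge on one rule: an element-wise operand may arrive as a numpy array (eager tensors are converted to numpy upstream), Keras merge layers are reserved for the two-tensor case, and a tensor-with-constant pair falls back to the broadcasting op with an explicit dtype cast. A minimal sketch of that flow for mul, assuming at least one operand is already a TF tensor; the function name is illustrative.

```python
import numpy as np
import tensorflow as tf

def convert_elementwise_mul(lhs, rhs):
    """Multiply two ONNX inputs where either side may be a constant."""
    # Cast a constant operand to the tensor operand's dtype up front,
    # avoiding the eager-tensor/dtype mismatch errors fixed in this batch.
    if isinstance(lhs, np.ndarray):
        lhs = tf.constant(lhs.astype(rhs.dtype.as_numpy_dtype))
    elif isinstance(rhs, np.ndarray):
        rhs = tf.constant(rhs.astype(lhs.dtype.as_numpy_dtype))
    # Keras Multiply() expects two same-shaped symbolic tensors, so the
    # tensor-times-constant case uses the broadcasting op instead.
    return tf.multiply(lhs, rhs)
```
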
* correct channels calculation when groups are present (see the sketch after this group)

* update package version
* .

* treat boolean in consts of shape

* add cleaned name and elementwise layers

* up version

* remove try catch

* up version

---------

Co-authored-by: ranhomri <ran.homri@tensorleap.ai>
Co-authored-by: tensorleap <yotam@tensorleap.ai>
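
For the groups fix: an ONNX Conv weight tensor has shape (M, C/groups, kH, kW), so reading the input channel count straight from the weights undercounts by a factor of groups. A minimal sketch of the corrected arithmetic (helper name is illustrative):

```python
def conv_channels(weight_shape, groups):
    """Recover (in_channels, out_channels) from an ONNX Conv weight shape."""
    out_channels = weight_shape[0]              # M
    in_channels = weight_shape[1] * groups      # (C / groups) * groups == C
    assert out_channels % groups == 0, "M must be divisible by groups"
    return in_channels, out_channels

# e.g. a depthwise-style conv with weights (32, 1, 3, 3) and groups=32
# yields in_channels=32, out_channels=32 rather than in_channels=1.
```
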
* .

* added global max pool layer (see the sketch after this group)

* pixellot o2k version

* pixellot o2k version

---------

Co-authored-by: tom Koren <tom.koren@tensorleap.ai>
Co-authored-by: ranhomri <ran.homri@tensorleap.ai>
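
ONNX GlobalMaxPool reduces every spatial dimension to size 1 while keeping the tensor rank, so the natural Keras mapping is a global pooling layer with keepdims. A one-line sketch, assuming the tensor is still channels-first at this point in the converter:

```python
import tensorflow as tf

def convert_global_max_pool(x):
    # keepdims=True preserves the rank-4 (N, C, 1, 1) output ONNX expects.
    return tf.keras.layers.GlobalMaxPooling2D(
        data_format="channels_first", keepdims=True)(x)
```
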
* .

* .

* .

* .

* .

* update cache key
* change cache key

* add dev dependencies to install

* lock

* debug ci

* fix pooling for None-shaped layers (see the sketch after this group)

* fix tests

* support mac dev dependencies

* install poetry in test

* try to setup python

* fix env caching and up keras

* remove dev
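
The None-shape pooling fix addresses static-shape arithmetic breaking when a spatial dimension is dynamic. A hedged sketch of the general remedy, not necessarily the PR's exact code: read runtime sizes from tf.shape only for the dims whose static value is None.

```python
import tensorflow as tf

def spatial_dims(x):
    """Return (H, W) for a channels-first tensor, tolerating None dims."""
    static = x.shape.as_list()[2:4]            # may contain None
    if any(s is None for s in static):
        dynamic = tf.shape(x)[2:4]             # runtime values for None dims
        return [dynamic[i] if s is None else s for i, s in enumerate(static)]
    return static
```
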
* adding a private test for yolo

* imports removed
* fixes for yolov11 test

* fixed test
* first version of working llama test with inference

* refactored the code and removed unnecessary parts

* removed unnecessary parts

* fixing versions

* fixing versions

* FIXED THE CONDITION

* added optimum

* ignored test

* ignored test
* fix reduce min for the no-axes case (see the sketch after this group)

* # This is a combination of 2 commits.
# This is the 1st commit message:

up o2k version

# The commit message #2 will be skipped:

# fixup! up o2k version

* remove optimum temp

* remove comment

* up o2k version

* change ubuntu

* add torch version print

* fix dinov2 test
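
Per the ONNX spec, ReduceMin with no axes attribute reduces over all dimensions, which is exactly what tf.reduce_min does with axis=None. A minimal sketch of the no-axes case (function name illustrative):

```python
import tensorflow as tf

def convert_reduce_min(x, axes=None, keepdims=1):
    # axes=None collapses every dimension, matching the ONNX default
    # when the attribute is absent; keepdims mirrors the ONNX attribute.
    return tf.reduce_min(x, axis=axes, keepdims=bool(keepdims))
```
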
* update to dev version and update convert_slice function

* update to not dev version
* add mnist onnx test

* fix conv layer to consider the SAME_LOWER and SAME_UPPER auto_pad params (see the sketch after this group)

* update version
Co-authored-by: Idan <idan@idan.com>
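
On the SAME_LOWER/SAME_UPPER fix: Keras padding="same" places the extra pixel of any odd padding at the end, which matches ONNX auto_pad=SAME_UPPER only. SAME_LOWER puts the extra pixel at the beginning, so it needs an explicit asymmetric ZeroPadding2D followed by a "valid" convolution. A sketch under those assumptions (channels-last tensors, static spatial dims; names illustrative):

```python
import tensorflow as tf

def same_pad_amounts(in_size, stride, kernel):
    out_size = -(-in_size // stride)                   # ceil(in / stride)
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    return total // 2, total - total // 2              # (smaller, larger) halves

def apply_auto_pad(x, kernel, strides, auto_pad):
    """Return (tensor, keras_padding_mode) honoring ONNX auto_pad."""
    if auto_pad == "SAME_UPPER":
        return x, "same"                               # Keras already matches
    # SAME_LOWER: the larger half of the padding goes at the beginning.
    pads = []
    for dim, k, s in zip(x.shape[1:3], kernel, strides):
        small, large = same_pad_amounts(int(dim), s, k)
        pads.append((large, small))
    return tf.keras.layers.ZeroPadding2D(padding=tuple(pads))(x), "valid"
```
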
* .

* .

* >

* >

* >

---------

Co-authored-by: Idan <idan@idan.com>
@or-tensorleap marked this pull request as draft November 19, 2025 13:46