Benchmarks #25

@itsdfish

Hello-

I want to propose adding some performance benchmarks. I think this could be useful for testing the performance implications of design decisions and for comparing DiscreteEvents to other packages. As a starting point, here is a simple model in which two functions schedule each other in succession. It is designed to assess the overhead of the scheduler.

using DiscreteEvents, BenchmarkTools

function chit(c)
    event!(c, fun(chat, c), after, 1.0)   # schedule chat 1.0 time units from now
end

function chat(c)
    event!(c, fun(chit, c), after, 1.0)   # schedule chit 1.0 time units from now
end

function test()
    c = Clock()
    event!(c, fun(chat, c), at, 0.0)      # kick off the ping-pong at t = 0
    run!(c, 10^5)                         # run for 10^5 time units, ~10^5 events
end

@btime test()
150.800 ms (1425057 allocations: 30.14 MiB)
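
If we adopt this, the benchmark could live in a BenchmarkTools suite, e.g. following the PkgBenchmark convention of a benchmark/benchmarks.jl file that exposes a SUITE. A minimal sketch, assuming the chit/chat definitions above are already loaded (the group and entry names are just placeholders I made up):

using BenchmarkTools

const SUITE = BenchmarkGroup()
SUITE["scheduler"] = BenchmarkGroup()
# each sample runs the full chit/chat model, so the measurement is
# dominated by scheduling cost rather than event work
SUITE["scheduler"]["chit_chat"] = @benchmarkable test()

# run locally with:
# tune!(SUITE); run(SUITE, verbose = true)

That would let us track scheduler overhead across versions rather than only in one-off comparisons.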
Here is similar code for SimPy:

import simpy
from simpy.util import start_delayed
import timeit

def chit(env):
    # schedule chat to start 1 time unit from now, then wait for it
    yield start_delayed(env, chat(env), delay=1)

def chat(env):
    # schedule chit to start 1 time unit from now, then wait for it
    yield start_delayed(env, chit(env), delay=1)

def test():
    env = simpy.Environment()
    env.process(chat(env))
    env.run(until=100000)

reps = 100
total_time = timeit.timeit(test, number=reps)
print(total_time / reps)   # average seconds per run
0.9407285390101606
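
If I am reading the numbers right, both runs schedule roughly 10^5 events (one per simulated time unit), so this works out to about 0.15 s per run for DiscreteEvents versus about 0.94 s for SimPy, roughly a 6x difference on this microbenchmark.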

Disclaimer: I am not familiar with Python or SimPy, so I cannot say whether this is the fairest benchmark. It might be good to double-check with someone who has more experience in Python.

It is also worth noting that this benchmark is useful for assessing the overhead a package incurs to manage and schedule events. However, it becomes less relevant as more time is spent processing the events themselves; a variant with per-event work, sketched below, would cover that case.
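
To quantify that, one could add a variant in which every event also performs some busy work. A rough sketch (the workload is an arbitrary stand-in, and busy_chat and busy_test are hypothetical names, not part of the package):

using DiscreteEvents, BenchmarkTools

# ping-pong as before, but each event also does n floating-point
# operations, so scheduling overhead is a smaller share of the runtime
function busy_chat(c, n)
    s = 0.0
    for i in 1:n        # stand-in for real event-processing work
        s += sqrt(i)
    end
    event!(c, fun(busy_chat, c, n), after, 1.0)
    return s
end

function busy_test(n)
    c = Clock()
    event!(c, fun(busy_chat, c, n), at, 0.0)
    run!(c, 10^5)
end

@btime busy_test(1000)   # compare against @btime test() above

As n grows, the scheduler's share of the total time shrinks, so comparisons between packages on this variant should converge even where raw scheduling overhead differs.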
