diff --git a/README.md b/README.md
index 6ba2727..f355864 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,8 @@
[](https://github.com/pythonhealthdatascience/rap_template_python_des/blob/main/LICENSE)
[](https://doi.org/10.5281/zenodo.14622466)
[](https://github.com/pythonhealthdatascience/rap_template_python_des/actions/workflows/tests.yaml)
+[](https://github.com/pythonhealthdatascience/rap_template_python_des/actions/workflows/lint.yaml)
+[](https://orcid.org/0000-0002-6596-3479)
A template for creating **discrete-event simulation (DES)** models in Python within a **reproducible analytical pipeline (RAP)**.
Click on **Use this template** to initialise a new repository.
@@ -244,7 +246,8 @@ repo/
├── lint.sh # Bash script to lint all .py and .ipynb files at once
├── pyproject.toml # Metadata for local `simulation/` package
├── README.md # This file! Describes the repository
-└── requirements.txt # Virtual environment (used by GitHub actions)
+├── requirements.txt # Virtual environment (used by GitHub actions)
+└── run_notebooks.sh # Bash script to run all .ipynb from the command line
```
@@ -323,7 +326,7 @@ This repository was developed with thanks to several other sources. These are a
| Amy Heather, Thomas Monks, Alison Harper, Navonil Mustafee, Andrew Mayne (2025) On the reproducibility of discrete-event simulation studies in health research: an empirical study using open models (https://doi.org/10.48550/arXiv.2501.13137). | `docs/heather_2025.md` |
| NHS Digital (2024) RAP repository template (https://github.com/NHSDigital/rap-package-template) (MIT Licence) | `simulation/logging.py` `docs/nhs_rap.md` |
| Sammi Rosser and Dan Chalk (2024) HSMA - the little book of DES (https://github.com/hsma-programme/hsma6_des_book) (MIT Licence) | `simulation/model.py` `notebooks/choosing_cores.ipynb` |
-| Tom Monks (2025) sim-tools: tools to support the Discrete-Event Simulation process in python (https://github.com/TomMonks/sim-tools) (MIT Licence) Who themselves cite Hoad, Robinson, & Davies (2010). Automated selection of the number of replications for a discrete-event simulation (https://www.jstor.org/stable/40926090). | `simulation/model.py` `simulation/replications.py` `notebooks/choosing_replications.ipynb` |
+| Tom Monks (2025) sim-tools: tools to support the Discrete-Event Simulation process in python (https://github.com/TomMonks/sim-tools) (MIT Licence), who themselves cite Hoad, Robinson, & Davies (2010). Automated selection of the number of replications for a discrete-event simulation (https://www.jstor.org/stable/40926090), and Knuth, D. "The Art of Computer Programming", Vol. 2, 2nd ed., p. 216. | `simulation/model.py` `simulation/replications.py` `notebooks/choosing_replications.ipynb` |
| Tom Monks, Alison Harper and Amy Heather (2025) An introduction to Discrete-Event Simulation (DES) using Free and Open Source Software (https://github.com/pythonhealthdatascience/intro-open-sim/tree/main). (MIT Licence) - who themselves also cite Law. Simulation Modeling and Analysis 4th Ed. Pages 14 - 17. | `simulation/model.py` |
| Tom Monks (2024) [HPDM097 - Making a difference with health data](https://github.com/health-data-science-OR/stochastic_systems) (MIT Licence). | `notebooks/analysis.ipynb` `notebooks/choosing_replications.ipynb` `notebooks/choosing_warmup.ipynb` |
| Monks T and Harper A. Improving the usability of open health service delivery simulation models using Python and web apps (https://doi.org/10.3310/nihropenres.13467.2) [version 2; peer review: 3 approved]. NIHR Open Res 2023, 3:48. Who themselves cite a [Stack Overflow](https://stackoverflow.com/questions/59406167/plotly-how-to-filter-a-pandas-dataframe-using-a-dropdown-menu) post. | `notebooks/analysis.ipynb` |
diff --git a/docs/heather_2025.md b/docs/heather_2025.md
index cd74a26..724d5f3 100644
--- a/docs/heather_2025.md
+++ b/docs/heather_2025.md
@@ -16,7 +16,7 @@ As part of the project STARS (Sharing Tools and Artefacts for Reproducible Simul
| Control randomness | ✅ | - |
| **Outputs** |
| Include code to calculate all required model outputs (⭐) | ✅ | - |
-| Include code to generate the tables, figures, and other reported results (⭐) | N/A | No publication. |
+| Include code to generate the tables, figures, and other reported results (⭐) | ✅ | Includes some examples (in `analysis.ipynb`) where these are generated. |
## Recommendations to support troubleshooting and reuse
@@ -30,7 +30,7 @@ As part of the project STARS (Sharing Tools and Artefacts for Reproducible Simul
| Comment sufficiently | ✅ | - |
| Ensure clarity and consistency in the model results tables | ✅ | - |
| Include run instructions | ✅ | - |
-| State run times and machine specifications | ✅ | In `REAME.md` and `.ipynb` files |
+| State run times and machine specifications | ✅ | In `README.md` and `.ipynb` files. |
| **Functionality** |
| Optimise model run time | ✅ | Provides option of parallel processing. |
| Save outputs to a file | ✅ | Includes some examples (in `analysis.ipynb`) where outputs are saved. |
diff --git a/docs/hsma_changes.md b/docs/hsma_changes.md
index 650d5a1..07da77f 100644
--- a/docs/hsma_changes.md
+++ b/docs/hsma_changes.md
@@ -233,6 +233,38 @@ if self.env.now > g.warm_up_period:
)
```
+## Corrections to the time with the resource
+
+In the HSMA model, `time_with_nurse` is used for tracking resource utilisation. We include this approach, but with two corrections.
+
+**Correction #1**: Towards the end of the simulation, simply recording the sampled time with the nurse will overestimate utilisation if the consultation would run beyond the simulation end. In that case, we record either the time with the nurse or the time remaining in the simulation, whichever is smaller.
+
+```python
+remaining_time = (
+ self.param.warm_up_period +
+ self.param.data_collection_period) - self.env.now
+self.nurse_time_used += min(
+ patient.time_with_nurse, remaining_time)
+```
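As a standalone illustration of this clamp (a hypothetical function with made-up numbers, not the template's actual `Model` class):

```python
def busy_time_recorded(time_with_nurse, now, warm_up_period,
                       data_collection_period):
    """Nurse-busy time to record, clamped to the time left in the run."""
    remaining_time = (warm_up_period + data_collection_period) - now
    return min(time_with_nurse, remaining_time)


# Consultation sampled at 10 time units, but only 4 remain in the run,
# so only 4 units are recorded towards utilisation:
print(busy_time_recorded(10, now=96, warm_up_period=20,
                         data_collection_period=80))  # 4
```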
+
+**Correction #2**: If a warm-up period is included, utilisation will be underestimated, as it won't include patients who start their consultation with the nurse during the warm-up and finish it in the data collection period. In these cases, we use an attribute `nurse_time_used_correction` to record the portion of the consultation that falls in the data collection period, and add this to `nurse_time_used` when results are reset at the end of the warm-up.
+
+```python
+remaining_warmup = self.param.warm_up_period - self.env.now
+if remaining_warmup > 0:
+ time_exceeding_warmup = (patient.time_with_nurse -
+ remaining_warmup)
+ if time_exceeding_warmup > 0:
+ self.nurse_time_used_correction += min(
+ time_exceeding_warmup,
+ self.param.data_collection_period)
+
+...
+
+# When resetting values after warm-up end...
+self.nurse_time_used += self.nurse_time_used_correction
+```
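As a standalone sketch of this correction (a hypothetical function, not the template's actual `Model` class; the numbers match the worked example in the logs, where a consultation starting at 28.667 of a 30-unit warm-up exceeds it by 3.962):

```python
def warmup_correction(time_with_nurse, now, warm_up_period,
                      data_collection_period):
    """Portion of a consultation starting in the warm-up that falls in
    the data collection period (0 if none)."""
    remaining_warmup = warm_up_period - now
    if remaining_warmup <= 0:
        return 0.0  # starts after the warm-up: no correction needed
    time_exceeding_warmup = time_with_nurse - remaining_warmup
    if time_exceeding_warmup <= 0:
        return 0.0  # ends within the warm-up: contributes nothing
    # Cap at the data collection period, as in the code above
    return min(time_exceeding_warmup, data_collection_period)


# Starts at t=28.667 of a 30-unit warm-up and lasts 5.295:
# 1.333 falls in the warm-up, 3.962 in the data collection period.
print(round(warmup_correction(5.295, now=28.667, warm_up_period=30,
                              data_collection_period=80), 3))  # 3.962
```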
+
## Extra features
### Prevent addition of new attributes to the parameter class
@@ -307,6 +339,14 @@ It can also be applied to other results dataframes if desired.
The `SimLogger` class will generate logs which can be saved to a file or printed to a console. This includes information on when patients arrive and are seen. This can be helpful for understanding the simulation or when debugging.
+### Selecting the length of the warm-up period
+
+The `choosing_warmup.ipynb` notebook includes a function which can be used to help choose an appropriate length for the warm-up period.
+
+### Selecting the number of replications to use
+
+`replications.py` contains various functions and classes that can be used to help choose an appropriate number of replications to run, as explored in `choosing_replications.ipynb`.
+
### Other minor changes
There are a few smaller changes to the model with minimal impact on function. These include:
diff --git a/docs/nhs_rap.md b/docs/nhs_rap.md
index 0b05fa1..d738cb9 100644
--- a/docs/nhs_rap.md
+++ b/docs/nhs_rap.md
@@ -31,7 +31,7 @@ Meeting all of the above requirements, plus:
| Code is well-organised following [standard directory format](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/project-structure-and-packaging/). | ✅ | - |
| [Reusable functions](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/python-functions/) and/or classes are used where appropriate. | ✅ | - |
| Code adheres to agreed coding standards (e.g PEP8, [style guide for Pyspark](https://nhsdigital.github.io/rap-community-of-practice/training_resources/pyspark/pyspark-style-guide/)). | ✅ | - |
-| Pipeline includes a testing framework ([unit tests](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/unit-testing/), [back tests](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/backtesting/)). | ✅ | `tests/` contains unit and back tests |
+| Pipeline includes a testing framework ([unit tests](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/unit-testing/), [back tests](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/backtesting/)). | ✅ | `tests/` contains unit, functional and back tests. |
| Repository includes dependency information (e.g. [requirements.txt](https://pip.pypa.io/en/stable/user_guide/#requirements-files), [PipFile](https://github.com/pypa/pipfile/blob/main/README.rst), [environment.yml](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/virtual-environments/conda/)). | ✅ | `environment.yaml` |
| [Logs](https://nhsdigital.github.io/rap-community-of-practice/training_resources/python/logging-and-error-handling/) are automatically recorded by the pipeline to ensure outputs are as expected. | ✅ | - |
| Data is handled and output in a [Tidy data format](https://medium.com/@kimrodrikwa/untidy-data-a90b6e3ebe4c). | ✅ | Meets the requirements of tidy data (each variable forms a column, each observation forms a row, each type of observational unit forms a table). |
@@ -47,4 +47,4 @@ Meeting all of the above requirements, plus:
| Code is fully [packaged](https://packaging.python.org/en/latest/). | ✅ | With thanks to [Joshua Cook](https://joshuacook.netlify.app/posts/2024-07-27_python-data-analysis-org/), whose blog post I used to help me structure this analysis as a package. |
| Repository automatically runs tests etc. via CI/CD or a different integration/deployment tool e.g. [GitHub Actions](https://docs.github.com/en/actions). | ✅ | `.github/workflows/tests.yaml` |
| Process runs based on event-based triggers (e.g., new data in database) or on a schedule. | N/A | - |
-| Changes to the RAP are clearly signposted. E.g. a changelog in the package, releases etc. (See gov.uk info on [Semantic Versioning](https://github.com/alphagov/govuk-frontend/blob/main/docs/contributing/versioning.md)). | ✅ | `CHANGELOG.md` and GitHub releases |
\ No newline at end of file
+| Changes to the RAP are clearly signposted. E.g. a changelog in the package, releases etc. (See gov.uk info on [Semantic Versioning](https://github.com/alphagov/govuk-frontend/blob/main/docs/contributing/versioning.md)). | ✅ | `CHANGELOG.md` and GitHub releases. |
\ No newline at end of file
diff --git a/images/replications_statistics.drawio b/images/replications_statistics.drawio
index a800639..b21895e 100644
--- a/images/replications_statistics.drawio
+++ b/images/replications_statistics.drawio
diff --git a/images/replications_statistics.png b/images/replications_statistics.png
index 8d5e8e4..f48f7c0 100644
Binary files a/images/replications_statistics.png and b/images/replications_statistics.png differ
diff --git a/notebooks/analysis.ipynb b/notebooks/analysis.ipynb
index 1b408e2..9d4c5d5 100644
--- a/notebooks/analysis.ipynb
+++ b/notebooks/analysis.ipynb
@@ -686,19 +906,19 @@
""
],
"text/plain": [
- " replications data cumulative_mean stdev lower_ci upper_ci \\\n",
- "35 36 0.490657 0.496990 0.007191 0.494557 0.499423 \n",
- "36 37 0.506302 0.497242 0.007254 0.494823 0.499660 \n",
- "37 38 0.508797 0.497546 0.007396 0.495115 0.499977 \n",
- "38 39 0.501514 0.497647 0.007326 0.495273 0.500022 \n",
- "39 40 0.504045 0.497807 0.007302 0.495472 0.500143 \n",
+ " replications data cumulative_mean stdev lower_ci upper_ci \\\n",
+ "0 1 0.501437 0.501437 NaN NaN NaN \n",
+ "1 2 0.510655 0.506046 NaN NaN NaN \n",
+ "2 3 0.496294 0.502796 0.007276 0.484720 0.520871 \n",
+ "3 4 0.500133 0.502130 0.006088 0.492442 0.511818 \n",
+ "4 5 0.498171 0.501338 0.005562 0.494432 0.508244 \n",
"\n",
- " deviation metric \n",
- "35 0.004896 mean_nurse_utilisation \n",
- "36 0.004864 mean_nurse_utilisation \n",
- "37 0.004886 mean_nurse_utilisation \n",
- "38 0.004772 mean_nurse_utilisation \n",
- "39 0.004691 mean_nurse_utilisation "
+ " deviation metric \n",
+ "0 NaN mean_nurse_utilisation \n",
+ "1 NaN mean_nurse_utilisation \n",
+ "2 0.035950 mean_nurse_utilisation \n",
+ "3 0.019294 mean_nurse_utilisation \n",
+ "4 0.013776 mean_nurse_utilisation "
]
},
"metadata": {},
@@ -713,8 +933,8 @@
" verbose=True\n",
")\n",
"\n",
- "display(cumulative.head())\n",
- "display(cumulative.tail())"
+ "for metric in METRICS:\n",
+ " display(cumulative[cumulative['metric'] == metric].head())"
]
},
{
@@ -725,7 +945,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -734,7 +954,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -743,7 +963,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -765,7 +985,9 @@
"source": [
"## Automated detection of appropriate number of replications\n",
"\n",
- "Run the algorithm (which will run model with increasing reps) for a few different metrics."
+    "Run the algorithm (which will run the model with increasing replications) for a few different metrics.\n",
+ "\n",
+    "The `mean_q_time_nurse` result is higher here because of the look-ahead: the deviation reached at replication 4 is not maintained. In the functions above, we could also set a higher `min_rep`, so that they look further along for the point where the desired deviation is sustained beyond replication 4."
]
},
{
@@ -773,21 +995,11 @@
"execution_count": 11,
"metadata": {},
"outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "/home/amy/Documents/stars/rap_template_python_des/simulation/replications.py:460: UserWarning:\n",
- "\n",
- "WARNING: the replications did not reach the desired precision for the following metrics: ['mean_q_time_nurse'].\n",
- "\n"
- ]
- },
{
"data": {
"text/plain": [
"{'mean_time_with_nurse': 3,\n",
- " 'mean_q_time_nurse': None,\n",
+ " 'mean_q_time_nurse': 30,\n",
" 'mean_nurse_utilisation': 3}"
]
},
@@ -1388,14 +1644,18 @@
"36 29 0.406899 0.486416 0.064906 0.461727 0.511105 \n",
"37 30 0.510818 0.487230 0.063933 0.463357 0.511103 \n",
"38 31 0.525523 0.488465 0.063233 0.465271 0.511659 \n",
- "39 1 0.501305 0.501305 NaN NaN NaN \n",
- "40 2 0.510607 0.505956 NaN NaN NaN \n",
- "41 3 0.496217 0.502709 0.007297 0.484582 0.520836 \n",
- "42 4 0.500054 0.502046 0.006104 0.492332 0.511759 \n",
- "43 5 0.498082 0.501253 0.005576 0.494330 0.508176 \n",
- "44 6 0.497124 0.500565 0.005264 0.495040 0.506089 \n",
- "45 7 0.491913 0.499329 0.005813 0.493953 0.504704 \n",
- "46 8 0.505458 0.500095 0.005801 0.495245 0.504945 \n",
+ "39 32 0.516598 0.489344 0.062403 0.466845 0.511843 \n",
+ "40 33 0.529421 0.490559 0.061816 0.468640 0.512477 \n",
+ "41 34 0.575176 0.493047 0.062578 0.471213 0.514882 \n",
+ "42 35 0.426855 0.491156 0.062658 0.469632 0.512680 \n",
+ "43 1 0.501437 0.501437 NaN NaN NaN \n",
+ "44 2 0.510655 0.506046 NaN NaN NaN \n",
+ "45 3 0.496294 0.502796 0.007276 0.484720 0.520871 \n",
+ "46 4 0.500133 0.502130 0.006088 0.492442 0.511818 \n",
+ "47 5 0.498171 0.501338 0.005562 0.494432 0.508244 \n",
+ "48 6 0.497362 0.500675 0.005233 0.495184 0.506167 \n",
+ "49 7 0.492170 0.499460 0.005758 0.494135 0.504786 \n",
+ "50 8 0.505458 0.500210 0.005737 0.495414 0.505006 \n",
"\n",
" deviation metric \n",
"0 NaN mean_time_with_nurse \n",
@@ -1437,14 +1697,18 @@
"36 0.050757 mean_q_time_nurse \n",
"37 0.048997 mean_q_time_nurse \n",
"38 0.047484 mean_q_time_nurse \n",
- "39 NaN mean_nurse_utilisation \n",
- "40 NaN mean_nurse_utilisation \n",
- "41 0.036059 mean_nurse_utilisation \n",
- "42 0.019347 mean_nurse_utilisation \n",
- "43 0.013812 mean_nurse_utilisation \n",
- "44 0.011037 mean_nurse_utilisation \n",
- "45 0.010766 mean_nurse_utilisation \n",
- "46 0.009698 mean_nurse_utilisation "
+ "39 0.045977 mean_q_time_nurse \n",
+ "40 0.044681 mean_q_time_nurse \n",
+ "41 0.044285 mean_q_time_nurse \n",
+ "42 0.043822 mean_q_time_nurse \n",
+ "43 NaN mean_nurse_utilisation \n",
+ "44 NaN mean_nurse_utilisation \n",
+ "45 0.035950 mean_nurse_utilisation \n",
+ "46 0.019294 mean_nurse_utilisation \n",
+ "47 0.013776 mean_nurse_utilisation \n",
+ "48 0.010969 mean_nurse_utilisation \n",
+ "49 0.010662 mean_nurse_utilisation \n",
+ "50 0.009589 mean_nurse_utilisation "
]
},
"metadata": {},
@@ -1453,7 +1717,9 @@
],
"source": [
"# Set up and run the algorithm\n",
- "analyser = ReplicationsAlgorithm()\n",
+ "analyser = ReplicationsAlgorithm(\n",
+ " alpha=0.05, half_width_precision=0.05, initial_replications=3,\n",
+ " look_ahead=5, replication_budget=1000)\n",
"solutions, cumulative = analyser.select(runner=Runner(Param()),\n",
" metrics=METRICS)\n",
"\n",
@@ -1477,7 +1743,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -1486,7 +1752,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -1495,7 +1761,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -1523,13 +1789,13 @@
"\n",
"`OnlineStatistics` is designed to:\n",
"\n",
- "* Keep a **running mean and variance**.\n",
+ "* Keep a **running mean and sum of squares**.\n",
"* Return **other statistics** based on these (e.g. standard deviation, confidence intervals).\n",
- "* **Call the `update()`** method of `ReplicationTabulizer` whenever a new data point is processed by `OnlineStatistics.\n",
+ "* **Call the `update()`** method of `ReplicationTabulizer` whenever a new data point is processed by `OnlineStatistics`.\n",
"\n",
- "#### How do the running mean and variance calculations work?\n",
+ "#### How do the running mean and sum of squares calculations work?\n",
"\n",
- "The running mean and variance are updated iteratively with each new data point provided, **without requiring the storage of all previous data points**. This approach is called \"online\" because we only need to store a small set of values (such as the current mean and variance), rather than maintaining an entire list of past values.\n",
+ "The running mean and sum of squares are updated iteratively with each new data point provided, **without requiring the storage of all previous data points**. This approach is called \"online\" because we only need to store a small set of values (such as the current mean and sum of squares), rather than maintaining an entire list of past values.\n",
"\n",
    "Focusing on the mean, normally you would need to store all the data points in a list and sum them up to compute the average - for example:\n",
"\n",
@@ -1550,9 +1816,9 @@
"- $x_n$ is the new data point.\n",
"- $\\mu_{n-1}$ is the running mean before the new data point.\n",
"\n",
- "The key thing to notice here is that, to update the mean, **all we needed to know was the current running mean, the new data point, and the number of data points**. A similar formula exists for calculating variance.\n",
+ "The key thing to notice here is that, to update the mean, **all we needed to know was the current running mean, the new data point, and the number of data points**. A similar formula exists for calculating the sum of squares.\n",
"\n",
- "In our code, every time we call `update()` with a new data point, the mean and variance are adjusted, with `n` keeping track of the number of data points so far.\n",
+ "In our code, every time we call `update()` with a new data point, the mean and sum of squares are adjusted, with `n` keeping track of the number of data points so far.\n",
"\n",
"```\n",
"class OnlineStatistics:\n",
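The class above is truncated here by the diff. As a minimal standalone sketch of the online update being described (a hypothetical class, not the template's actual `OnlineStatistics` implementation):

```python
class OnlineStats:
    """Welford-style running mean and sum of squared deviations."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.sq = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n         # mu_n = mu_{n-1} + (x_n - mu_{n-1}) / n
        self.sq += delta * (x - self.mean)  # uses both old and new mean

    @property
    def variance(self):
        # Sample variance derived from the running sum of squares
        return self.sq / (self.n - 1) if self.n > 1 else float("nan")


stats = OnlineStats()
for x in [0.50, 0.51, 0.49, 0.50]:
    stats.update(x)
print(round(stats.mean, 3))  # 0.5
```

Note that only `n`, `mean`, and `sq` are stored; no list of past data points is kept, which is what makes the approach "online".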
@@ -1567,7 +1833,7 @@
"\n",
"#### What other statistics can it calculate?\n",
" \n",
- "`OnlineStatistics` then has a series of methods which can return other statistics based on the current mean, variance, and count:\n",
+ "`OnlineStatistics` then has a series of methods which can return other statistics based on the current mean, sum of squares, and count:\n",
"\n",
"* Variance\n",
"* Standard deviation\n",
@@ -1632,7 +1898,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "Notebook run time: 0m 11s\n"
+ "Notebook run time: 0m 13s\n"
]
}
],
@@ -1647,8 +1913,22 @@
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "template-des",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.1"
}
},
"nbformat": 4,
diff --git a/notebooks/choosing_warmup.ipynb b/notebooks/choosing_warmup.ipynb
index 8bee422..65c4820 100644
--- a/notebooks/choosing_warmup.ipynb
+++ b/notebooks/choosing_warmup.ipynb
@@ -300,7 +300,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -345,8 +345,22 @@
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "template-des",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.1"
}
},
"nbformat": 4,
diff --git a/notebooks/generate_exp_results.ipynb b/notebooks/generate_exp_results.ipynb
index 9915d9f..5d1aa70 100644
--- a/notebooks/generate_exp_results.ipynb
+++ b/notebooks/generate_exp_results.ipynb
@@ -946,11 +946,11 @@
"3 4 9.938504 9.941599 0.089900 9.798549 10.084650 \n",
"4 5 10.016611 9.956602 0.084775 9.851339 10.061864 \n",
".. ... ... ... ... ... ... \n",
- "35 36 0.492981 0.497883 0.007143 0.495466 0.500300 \n",
- "36 37 0.508118 0.498159 0.007242 0.495745 0.500574 \n",
- "37 38 0.502153 0.498264 0.007172 0.495907 0.500622 \n",
- "38 39 0.499857 0.498305 0.007082 0.496010 0.500601 \n",
- "39 40 0.512021 0.498648 0.007319 0.496307 0.500989 \n",
+ "35 36 0.493083 0.498001 0.007179 0.495572 0.500430 \n",
+ "36 37 0.508147 0.498275 0.007272 0.495851 0.500700 \n",
+ "37 38 0.502386 0.498383 0.007204 0.496015 0.500751 \n",
+ "38 39 0.499974 0.498424 0.007113 0.496118 0.500730 \n",
+ "39 40 0.512111 0.498766 0.007348 0.496417 0.501116 \n",
"\n",
" deviation metric \n",
"0 NaN mean_time_with_nurse \n",
@@ -959,11 +959,11 @@
"3 0.014389 mean_time_with_nurse \n",
"4 0.010572 mean_time_with_nurse \n",
".. ... ... \n",
- "35 0.004854 mean_nurse_utilisation \n",
- "36 0.004847 mean_nurse_utilisation \n",
- "37 0.004731 mean_nurse_utilisation \n",
- "38 0.004607 mean_nurse_utilisation \n",
- "39 0.004694 mean_nurse_utilisation \n",
+ "35 0.004877 mean_nurse_utilisation \n",
+ "36 0.004866 mean_nurse_utilisation \n",
+ "37 0.004751 mean_nurse_utilisation \n",
+ "38 0.004626 mean_nurse_utilisation \n",
+ "39 0.004711 mean_nurse_utilisation \n",
"\n",
"[120 rows x 8 columns]"
]
@@ -1032,7 +1032,16 @@
],
"metadata": {
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.1"
}
},
"nbformat": 4,
diff --git a/notebooks/logs.ipynb b/notebooks/logs.ipynb
index be959ed..51a2066 100644
--- a/notebooks/logs.ipynb
+++ b/notebooks/logs.ipynb
@@ -84,6 +84,7 @@
"\u001b[39m \u001b[0m\u001b[32m'nurse_consult_count'\u001b[0m\u001b[39m: \u001b[0m\u001b[1;36m0\u001b[0m\u001b[39m,\u001b[0m \n",
"\u001b[39m \u001b[0m\u001b[32m'nurse_consult_time_dist'\u001b[0m\u001b[39m: \u001b[0m\u001b[32m''\u001b[0m\u001b[39m,\u001b[0m \n",
"\u001b[39m \u001b[0m\u001b[32m'nurse_time_used'\u001b[0m\u001b[39m: \u001b[0m\u001b[1;36m0\u001b[0m\u001b[39m,\u001b[0m \n",
+ "\u001b[39m \u001b[0m\u001b[32m'nurse_time_used_correction'\u001b[0m\u001b[39m: \u001b[0m\u001b[1;36m0\u001b[0m\u001b[39m,\u001b[0m \n",
"\u001b[39m \u001b[0m\u001b[32m'param'\u001b[0m\u001b[39m: \u001b[0m\u001b[32m''\u001b[0m\u001b[39m,\u001b[0m \n",
"\u001b[39m \u001b[0m\u001b[32m'patient_inter_arrival_dist'\u001b[0m\u001b[39m: \u001b[0m\u001b[32m'\u001b[0m\u001b[32m'\u001b[0m, \n",
" \u001b[32m'patients'\u001b[0m: \u001b[1m[\u001b[0m\u001b[1m]\u001b[0m, \n",
@@ -180,6 +181,14 @@
"\u001b[1;36m28.667\u001b[0m: 🔶 WU Patient \u001b[1;36m4\u001b[0m is seen by nurse after \u001b[1;36m6.527\u001b[0m. Consultation length: \u001b[1;36m5.295\u001b[0m. \n"
]
},
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[1;36m28.667\u001b[0m: 🛠 Patient \u001b[1;36m4\u001b[0m starts consultation with \u001b[1;36m1.333\u001b[0m left of warm-up \u001b[1m(\u001b[0mwhich is \u001b[1;36m30.000\u001b[0m\u001b[1m)\u001b[0m. As their consultation is for \n",
+ "\u001b[1;36m5.295\u001b[0m, they will exceed warmup by \u001b[1;36m3.962\u001b[0m, so we correct for this. \n"
+ ]
+ },
{
"name": "stdout",
"output_type": "stream",
@@ -400,8 +409,22 @@
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "template-des",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.1"
}
},
"nbformat": 4,
diff --git a/notebooks/time_weighted_averages.ipynb b/notebooks/time_weighted_averages.ipynb
index 80e4497..e7ae21c 100644
--- a/notebooks/time_weighted_averages.ipynb
+++ b/notebooks/time_weighted_averages.ipynb
@@ -72,7 +72,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -238,7 +238,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -311,7 +311,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -362,7 +362,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -534,7 +534,7 @@
{
"data": {
"image/svg+xml": [
- ""
+ ""
]
},
"metadata": {},
@@ -624,6 +624,11 @@
"\\text{mean\\_nurse\\_utilisation} = \\frac{\\text{model.nurse\\_time\\_used}}{\\text{param.number\\_of\\_nurses} \\times \\text{param.data\\_collection\\_period}}\n",
"$$\n",
"\n",
+ "This requires some corrections:\n",
+ "\n",
+      "* When the sampled time with the nurse would exceed the remaining simulation time, and\n",
+ "* When consultations overlap between the warm-up and data collection periods.\n",
+ "\n",
"### Run results `mean_nurse_utilisation_tw`\n",
"\n",
"Nurses are set up using `MonitoredResource`. Every time a nurse is seized or released, the \"area under the curve\" for resources busy is updated (see `time_weighted_averages.ipynb` for more detailed explanation of how this method works).\n",
@@ -650,12 +655,10 @@
"\n",
"### Comparison of methods\n",
"\n",
- "Ultimately, we use either `mean_nurse_utilisation` or `mean_nurse_utilisation_tw`, and each will give pretty much the same result.\n",
- "\n",
"| Metric | Pros ✅ | Cons 🟥 |\n",
"| - | - | - |\n",
- "| Run results `mean_nurse_utilisation` | Simple to implement. | Requires correction for when time with nurse is less than remaining simulation time - `min(patient.time_with_nurse, remaining_time)`. |\n",
- "| Run results `mean_nurse_utilisation_tw` | Doesn't require correction for when time with nurse is less than remaining time (though, as part of how this method is introduced, do have call to `update_time_weighted_stats()` at end of simulation). | More complex to implement and understand the method (`time_weighted_averages.ipynb` provided to support). |\n",
+      "| Run results `mean_nurse_utilisation` | Easy to understand. | Requires correction for when the time with the nurse would exceed the remaining simulation time - `min(patient.time_with_nurse, remaining_time)`.
Requires correction for consultations that overlap the warm-up and data collection periods, as these are otherwise excluded from the recorded nurse time used. |\n",
+      "| Run results `mean_nurse_utilisation_tw` | Doesn't require correction for overlap between the warm-up and data collection periods.
Could add other metrics (e.g. max queue, number of times queue exceeds X length) using `MonitoredResource`. | Requires correction for when the time with the nurse would exceed the remaining simulation time (by calling `update_time_weighted_stats()` at the end of the simulation).
More complex method to implement and understand (`time_weighted_averages.ipynb` is provided to support this). |\n",
"| Interval audit `utilisation` | Helpful for monitoring over time, such as when choosing appropriate warm-up length. | Less accurate for overall mean, can miss details if interval audit intervals are sparse. |"
]
},
@@ -692,8 +695,22 @@
}
],
"metadata": {
+ "kernelspec": {
+ "display_name": "template-des",
+ "language": "python",
+ "name": "python3"
+ },
"language_info": {
- "name": "python"
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.13.1"
}
},
"nbformat": 4,
diff --git a/outputs/choose_param_cores.png b/outputs/choose_param_cores.png
index 4e81d5f..a09e6c1 100644
Binary files a/outputs/choose_param_cores.png and b/outputs/choose_param_cores.png differ
diff --git a/outputs/choose_reps_mean_nurse_utilisation.png b/outputs/choose_reps_mean_nurse_utilisation.png
index 2b283b1..c3c4ef3 100644
Binary files a/outputs/choose_reps_mean_nurse_utilisation.png and b/outputs/choose_reps_mean_nurse_utilisation.png differ
diff --git a/outputs/choose_reps_mean_q_time_nurse.png b/outputs/choose_reps_mean_q_time_nurse.png
index ad72732..0fcd040 100644
Binary files a/outputs/choose_reps_mean_q_time_nurse.png and b/outputs/choose_reps_mean_q_time_nurse.png differ
diff --git a/outputs/choose_reps_mean_time_with_nurse.png b/outputs/choose_reps_mean_time_with_nurse.png
index d13b60c..00e1559 100644
Binary files a/outputs/choose_reps_mean_time_with_nurse.png and b/outputs/choose_reps_mean_time_with_nurse.png differ
diff --git a/outputs/example_overall.csv b/outputs/example_overall.csv
index 93a61ac..29c2dda 100644
--- a/outputs/example_overall.csv
+++ b/outputs/example_overall.csv
@@ -1,5 +1,5 @@
arrivals,mean_q_time_nurse,mean_time_with_nurse,mean_nurse_utilisation,mean_nurse_utilisation_tw,mean_nurse_q_length,count_unseen,mean_q_time_unseen
-10755.161290322581,0.48846497726698573,9.977252688905901,0.49668912005576155,0.49682639332763356,0.12166204756137887,0.22580645161290322,2.116866544017588
-92.85942665276139,0.06323314614048788,0.10417538014463167,0.007417351087622342,0.007381804514793688,0.016024525175284875,0.8045569140570411,1.217050008063599
-10721.100177638484,0.4652708720081293,9.939040850970656,0.4939684137115607,0.4941187255716116,0.11578420427309796,-0.06930740702572605,-0.9064532780617758
-10789.222403006679,0.5116590825258421,10.015464526841146,0.4994098263999624,0.49953406108365556,0.12753989084965978,0.5209203102515325,5.140186366096952
+10755.161290322581,0.48846497726698573,9.977252688905901,0.49682639332763373,0.49682639332763356,0.12166204756137887,0.22580645161290322,2.116866544017588
+92.85942665276139,0.06323314614048788,0.10417538014463167,0.007381804514793757,0.007381804514793688,0.016024525175284875,0.8045569140570411,1.217050008063599
+10721.100177638484,0.4652708720081293,9.939040850970656,0.4941187255716117,0.4941187255716116,0.11578420427309796,-0.06930740702572605,-0.9064532780617758
+10789.222403006679,0.5116590825258421,10.015464526841146,0.4995340610836558,0.49953406108365556,0.12753989084965978,0.5209203102515325,5.140186366096952
diff --git a/outputs/example_run.csv b/outputs/example_run.csv
index 99e9127..cb9ce00 100644
--- a/outputs/example_run.csv
+++ b/outputs/example_run.csv
@@ -1,32 +1,32 @@
run_number,scenario,arrivals,mean_q_time_nurse,mean_time_with_nurse,mean_nurse_utilisation,mean_nurse_utilisation_tw,mean_nurse_q_length,count_unseen,mean_q_time_unseen
-0,0,10790,0.5185321805418852,10.038295078961,0.501305063842817,0.501437422224387,0.12951301453812364,0,
-1,0,10787,0.49626891151454416,10.224439362484883,0.5106066083477981,0.5106553500497837,0.12391788769693028,0,
-2,0,10827,0.49710092951617946,9.901036511215478,0.49621651283902174,0.4962939811036556,0.12458592045999248,0,
-3,0,10800,0.5049693589409172,10.00234076744067,0.500053900736048,0.5001329378691672,0.1262847539976046,0,
-4,0,10748,0.4746767027717812,10.011372602107304,0.49808181197878676,0.49817086137571487,0.11809780558775704,0,
-5,0,10788,0.46710374089441664,9.95776713973189,0.4971236342726268,0.49736181208654845,0.11664618418446682,0,
-6,0,10747,0.45256195689377055,9.888304651236542,0.49191280445244473,0.49217026037847406,0.11258526274854982,0,
+0,0,10790,0.5185321805418852,10.038295078961,0.5014374222243866,0.501437422224387,0.12951301453812364,0,
+1,0,10787,0.49626891151454416,10.224439362484883,0.5106553500497839,0.5106553500497837,0.12391788769693028,0,
+2,0,10827,0.49710092951617946,9.901036511215478,0.4962939811036566,0.4962939811036556,0.12458592045999248,0,
+3,0,10800,0.5049693589409172,10.00234076744067,0.5001329378691676,0.5001329378691672,0.1262847539976046,0,
+4,0,10748,0.4746767027717812,10.011372602107304,0.49817086137571637,0.49817086137571487,0.11809780558775704,0,
+5,0,10788,0.46710374089441664,9.95776713973189,0.49736181208654817,0.49736181208654845,0.11664618418446682,0,
+6,0,10747,0.45256195689377055,9.888304651236542,0.49217026037847267,0.49217026037847406,0.11258526274854982,0,
7,0,10719,0.5902687696908977,10.190850645468494,0.5054577429842375,0.5054577429842357,0.14646043847955398,0,
-8,0,10836,0.5240454126388586,10.129095063656315,0.5080737041057033,0.5080937133949796,0.13144805767024706,0,
-9,0,10742,0.4566419736598521,9.989609627326171,0.4968252579662247,0.4969467682534455,0.11354884944558173,0,
-10,0,10854,0.5380936763354897,9.990544391961002,0.501557358017608,0.5017157196767453,0.1352149673627936,4,0.742550458155165
-11,0,10724,0.3411799813085308,9.851412121356372,0.48893298596283,0.48915031237879747,0.08469477128594176,0,
-12,0,10854,0.448186069405559,9.87792855120413,0.4962617698686973,0.4963750893055759,0.1126067499381467,0,
-13,0,10693,0.3917601910319952,9.78946825875932,0.48451415471399123,0.4847880074401236,0.09696971580335938,0,
-14,0,10917,0.5675635450599201,10.049891549067578,0.5079382687091245,0.5081148347410629,0.14342803753285066,0,
-15,0,10833,0.46091608505493414,9.984502776099735,0.500584271571499,0.5006534951078692,0.11558110993981717,0,
-16,0,10788,0.47369062632526554,10.015265603108288,0.500203057432327,0.5005678829416623,0.11830834089455325,0,
-17,0,10690,0.4761740903833518,9.983169123880916,0.4939836984126524,0.49413194344269457,0.11783104227310257,0,
-18,0,10486,0.3575944290378721,9.868477415217479,0.47904036188432186,0.4791626399391362,0.0867994255298872,0,
-19,0,10815,0.42851831043970895,9.898975761536875,0.49552036300957525,0.49563482014119953,0.10727836868994102,0,
-20,0,10622,0.5570755282298945,9.825137295330315,0.4830956064715635,0.4832761845768154,0.13697352455689674,0,
-21,0,10830,0.4869099264349747,9.963018748210395,0.49940514012110293,0.49961937437538423,0.1220656135021013,0,
-22,0,10676,0.6256309746504354,9.859948772811755,0.4872098375119968,0.4873663905994784,0.1548525174960317,2,3.0582925444905413
-23,0,10876,0.5490427351511018,9.993037203244715,0.5031287030423103,0.5031867920065705,0.13822659230331905,0,
-24,0,10850,0.5006743147616936,9.953653118735623,0.49990611902450094,0.499963353519239,0.1257480628510272,0,
-25,0,10670,0.5369800995326406,10.132875835675495,0.5002864609169067,0.5003038278267509,0.1326291125466036,0,
-26,0,10693,0.4524202799705882,9.84769122075977,0.4874990761731514,0.4877214650981865,0.11198449198438656,0,
-27,0,10712,0.5245934485062953,10.025456050550394,0.49705168401583016,0.49735357612504555,0.1300797458425795,0,
-28,0,10577,0.4068991127092483,10.042976870835432,0.4917803998278993,0.49182810404657357,0.09962434988716942,0,
-29,0,10693,0.5108176843100298,9.964715022136751,0.49300069224470977,0.4931478310508971,0.12648639901093164,1,2.5497566294070566
-30,0,10773,0.5255232495739238,10.04357621597181,0.5008056712703005,0.5008356990964371,0.13105236036249726,0,
+8,0,10836,0.5240454126388586,10.129095063656315,0.5080937133949796,0.5080937133949796,0.13144805767024706,0,
+9,0,10742,0.4566419736598521,9.989609627326171,0.49694676825344614,0.4969467682534455,0.11354884944558173,0,
+10,0,10854,0.5380936763354897,9.990544391961002,0.5017157196767464,0.5017157196767453,0.1352149673627936,4,0.742550458155165
+11,0,10724,0.3411799813085308,9.851412121356372,0.4891503123787982,0.48915031237879747,0.08469477128594176,0,
+12,0,10854,0.448186069405559,9.87792855120413,0.4963750893055763,0.4963750893055759,0.1126067499381467,0,
+13,0,10693,0.3917601910319952,9.78946825875932,0.4847880074401233,0.4847880074401236,0.09696971580335938,0,
+14,0,10917,0.5675635450599201,10.049891549067578,0.5081148347410617,0.5081148347410629,0.14342803753285066,0,
+15,0,10833,0.46091608505493414,9.984502776099735,0.500653495107869,0.5006534951078692,0.11558110993981717,0,
+16,0,10788,0.47369062632526554,10.015265603108288,0.5005678829416618,0.5005678829416623,0.11830834089455325,0,
+17,0,10690,0.4761740903833518,9.983169123880916,0.4941319434426954,0.49413194344269457,0.11783104227310257,0,
+18,0,10486,0.3575944290378721,9.868477415217479,0.4791626399391365,0.4791626399391362,0.0867994255298872,0,
+19,0,10815,0.42851831043970895,9.898975761536875,0.4956348201411994,0.49563482014119953,0.10727836868994102,0,
+20,0,10622,0.5570755282298945,9.825137295330315,0.4832761845768153,0.4832761845768154,0.13697352455689674,0,
+21,0,10830,0.4869099264349747,9.963018748210395,0.4996193743753855,0.49961937437538423,0.1220656135021013,0,
+22,0,10676,0.6256309746504354,9.859948772811755,0.4873663905994785,0.4873663905994784,0.1548525174960317,2,3.0582925444905413
+23,0,10876,0.5490427351511018,9.993037203244715,0.5031867920065699,0.5031867920065705,0.13822659230331905,0,
+24,0,10850,0.5006743147616936,9.953653118735623,0.4999633535192411,0.499963353519239,0.1257480628510272,0,
+25,0,10670,0.5369800995326406,10.132875835675495,0.5003038278267512,0.5003038278267509,0.1326291125466036,0,
+26,0,10693,0.4524202799705882,9.84769122075977,0.4877214650981865,0.4877214650981865,0.11198449198438656,0,
+27,0,10712,0.5245934485062953,10.025456050550394,0.49735357612504494,0.49735357612504555,0.1300797458425795,0,
+28,0,10577,0.4068991127092483,10.042976870835432,0.49182810404657396,0.49182810404657357,0.09962434988716942,0,
+29,0,10693,0.5108176843100298,9.964715022136751,0.49314783105089627,0.4931478310508971,0.12648639901093164,1,2.5497566294070566
+30,0,10773,0.5255232495739238,10.04357621597181,0.5008356990964372,0.5008356990964371,0.13105236036249726,0,
diff --git a/outputs/logs/log_example.log b/outputs/logs/log_example.log
index 8723e5b..bb1006d 100644
--- a/outputs/logs/log_example.log
+++ b/outputs/logs/log_example.log
@@ -6,6 +6,7 @@
'nurse_consult_count': 0,
'nurse_consult_time_dist': '',
'nurse_time_used': 0,
+ 'nurse_time_used_correction': 0,
'param': '',
'patient_inter_arrival_dist': '',
'patients': [],
@@ -34,6 +35,7 @@
23.023: 🔸 WU Patient 5 arrives at: 23.023.
25.025: 🔶 WU Patient 3 is seen by nurse after 3.789. Consultation length: 3.642.
28.667: 🔶 WU Patient 4 is seen by nurse after 6.527. Consultation length: 5.295.
+28.667: 🛠 Patient 4 starts consultation with 1.333 left of warm-up (which is 30.000). As their consultation is for 5.295, they will exceed warmup by 3.962, so we correct for this.
30.000: ──────────
30.000: Warm up complete.
30.000: ──────────
diff --git a/outputs/scenario_nurse_util.png b/outputs/scenario_nurse_util.png
index fd143fd..408ea19 100644
Binary files a/outputs/scenario_nurse_util.png and b/outputs/scenario_nurse_util.png differ
diff --git a/run_notebooks.sh b/run_notebooks.sh
index 152e4dd..8b52aa6 100644
--- a/run_notebooks.sh
+++ b/run_notebooks.sh
@@ -7,8 +7,10 @@ for nb in notebooks/*.ipynb; do
# Execute and update the notebook in-place
# With some processing to remove metadata created by nbconvert
if python -m jupyter nbconvert --to notebook --inplace --execute \
- -TagRemovePreprocessor.remove_input_tags="execution" \
- --ClearMetadataPreprocessor.enabled=True "$nb"; then
+ --ClearMetadataPreprocessor.enabled=True \
+ --ClearMetadataPreprocessor.clear_notebook_metadata=False \
+ --ClearMetadataPreprocessor.preserve_cell_metadata_mask="kernelspec" \
+ "$nb"; then
echo "✅ Successfully processed: $nb"
else
echo "❌ Error processing: $nb"
diff --git a/simulation/model.py b/simulation/model.py
index 1a1a996..62495de 100644
--- a/simulation/model.py
+++ b/simulation/model.py
@@ -361,6 +361,12 @@ class Model:
List containing the generated patient objects.
nurse_time_used (float):
Total time the nurse resources have been used in minutes.
+ nurse_time_used_correction (float):
+ Correction for nurse time used with a warm-up period. Without
+ correction, it will be underestimated, as patients who start their
+ time with the nurse during the warm-up period and finish it during
+ the data collection period will not be included in the recorded
+ time.
nurse_consult_count (int):
Count of patients seen by nurse, using to calculate running mean
wait time.
@@ -406,6 +412,7 @@ def __init__(self, param, run_number):
# Initialise attributes to store results
self.patients = []
self.nurse_time_used = 0
+ self.nurse_time_used_correction = 0
self.nurse_consult_count = 0
self.running_mean_nurse_wait = 0
self.audit_list = []
@@ -536,6 +543,31 @@ def attend_clinic(self, patient):
self.nurse_time_used += min(
patient.time_with_nurse, remaining_time)
+ # If still in the warm-up period, check whether the patient's
+ # time with the nurse will run past its end (i.e. whether
+ # time_exceeding_warmup is positive). If so, save the excess to
+ # nurse_time_used_correction, capped at the data collection
+ # period so it cannot exceed the end of the simulation.
+ remaining_warmup = self.param.warm_up_period - self.env.now
+ if remaining_warmup > 0:
+ time_exceeding_warmup = (patient.time_with_nurse -
+ remaining_warmup)
+ if time_exceeding_warmup > 0:
+ self.nurse_time_used_correction += min(
+ time_exceeding_warmup,
+ self.param.data_collection_period)
+ # Logging message
+ self.param.logger.log(
+ sim_time=self.env.now,
+ msg=(f'\U0001F6E0 Patient {patient.patient_id} ' +
+ 'starts consultation with ' +
+ f'{remaining_warmup:.3f} left of warm-up (which' +
+ f' is {self.param.warm_up_period:.3f}). As ' +
+ 'their consultation is for ' +
+ f'{patient.time_with_nurse:.3f}, they will ' +
+ f'exceed warmup by {time_exceeding_warmup:.3f},' +
+ ' so we correct for this.')
+ )
+
# Pass time spent with nurse
yield self.env.timeout(patient.time_with_nurse)
@@ -590,6 +622,12 @@ def warm_up_complete(self):
# Reset results collection variables
self.init_results_variables()
+ # Correct nurse_time_used, adding the remaining time of patients
+ # who were partway through their consultation during the warm-up
+ # period (i.e. patients still in consultation as they enter the
+ # data collection period).
+ self.nurse_time_used += self.nurse_time_used_correction
+
# If there was a warm-up period, log that this time has passed so
# can distinguish between patients before and after warm-up in logs
if self.param.warm_up_period > 0:
@@ -609,7 +647,7 @@ def run(self):
self.param.data_collection_period)
# Schedule process which will reset results when warm-up period ends
- # (or does nothing if these is no warm-up)
+ # (or does nothing if there is no warm-up)
self.env.process(self.warm_up_complete())
# Schedule patient generator to run during simulation
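The correction added in `attend_clinic` and `warm_up_complete` can be exercised standalone. A minimal sketch (the 60-minute data collection period is a hypothetical value; the warm-up and consultation numbers mirror the example log above):

```python
def warmup_correction(now, warm_up_period, data_collection_period,
                      time_with_nurse):
    """Return nurse time that would otherwise go unrecorded when a
    consultation starts in the warm-up period and finishes in the
    data collection period. Standalone sketch of the logic above."""
    remaining_warmup = warm_up_period - now
    if remaining_warmup <= 0:
        return 0.0  # already in the data collection period
    time_exceeding_warmup = time_with_nurse - remaining_warmup
    if time_exceeding_warmup <= 0:
        return 0.0  # consultation ends within the warm-up period
    # Cap at the data collection period so the correction cannot
    # exceed the remaining simulation time.
    return min(time_exceeding_warmup, data_collection_period)

# Matches the example log: a 5.295-minute consultation starting with
# 1.333 left of a 30-minute warm-up exceeds the warm-up by 3.962.
print(round(warmup_correction(28.667, 30.0, 60.0, 5.295), 3))  # 3.962
```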
diff --git a/simulation/replications.py b/simulation/replications.py
index 09eca18..757784a 100644
--- a/simulation/replications.py
+++ b/simulation/replications.py
@@ -268,12 +268,7 @@ class ReplicationsAlgorithm:
The target half width precision for the algorithm (i.e. percentage
deviation of the confidence interval from the mean).
initial_replications (int):
- Number of initial replications to perform. Note that the minimum
- solution will be the value of initial_replications (i.e. if require
- 20 initial replications but was resolved in 5, solution output
- will still be 20). Although, if initial_replications < 3, solution
- will still be at least 3, as that is the minimum required to
- calculate the confidence intervals.
+ Number of initial replications to perform.
look_ahead (int):
Minimum additional replications to look ahead to assess stability
of precision. When the number of replications is <= 100, the value
@@ -319,9 +314,11 @@ def valid_inputs(self):
"""
Checks validity of provided parameters.
"""
- for p in [self.initial_replications, self.look_ahead]:
- if not isinstance(p, int) or p < 0:
- raise ValueError(f'{p} must be a non-negative integer.')
+ for param_name in ['initial_replications', 'look_ahead']:
+ param_value = getattr(self, param_name)
+ if not isinstance(param_value, int) or param_value < 0:
+ raise ValueError(f'{param_name} must be a non-negative '
+ f'integer, but provided {param_value}.')
if self.half_width_precision <= 0:
raise ValueError('half_width_precision must be greater than 0.')
@@ -334,7 +331,7 @@ def _klimit(self):
"""
Determines the number of additional replications to check after
precision is reached, scaling with total replications if they are
- greater than 100.
+ greater than 100. Rounded down to the nearest integer.
Returns:
int:
@@ -342,6 +339,33 @@ def _klimit(self):
"""
return int((self.look_ahead / 100) * max(self.n, 100))
+ def find_position(self, lst):
+ """
+ Find the first position where the deviation falls below the desired
+ precision and remains below it throughout the lookahead period.
+
+ This is used to correct the ReplicationsAlgorithm, which cannot
+ otherwise return a solution below initial_replications.
+
+ Arguments:
+ lst (list):
+ Deviations observed at each replication (None for replications
+ where a confidence interval could not yet be calculated).
+
+ Returns:
+ int:
+ Minimum replications required to meet and maintain precision,
+ or None if precision was never met and maintained.
+ """
+ # Find the first non-None value in the list
+ start_index = pd.Series(lst).first_valid_index()
+
+ # Iterate through the list, stopping at the last point where there
+ # are still enough elements left to look ahead
+ if start_index is not None:
+ for i in range(start_index, len(lst) - self.look_ahead + 1):
+ # Slice from the current value across the lookahead window,
+ # and check whether all deviations fall below the target
+ if all(value < self.half_width_precision
+ for value in lst[i:i+self.look_ahead]):
+ # Add one to convert the zero-based index into a
+ # number of replications
+ return i + 1
+ return None
+
# pylint: disable=too-many-branches
def select(self, runner, metrics):
"""
@@ -412,8 +436,8 @@ def select(self, runner, metrics):
if self._klimit() == 0:
solutions[metric]['solved'] = True
- # Whilst under replication budget + lookahead, and have not yet
- # got all metrics marked as solved = TRUE...
+ # Whilst not all metrics are marked as solved = TRUE, and we are
+ # still under the replication budget + lookahead...
while (
sum(1 for v in solutions.values()
if v['solved']) < len(metrics)
@@ -432,7 +456,7 @@ def select(self, runner, metrics):
# If it is not yet solved...
if not solutions[metric]['solved']:
- # Update the running mean and stdev for that metric
+ # Update the running statistics for that metric
stats[metric].update(results[metric])
# If precision has been achieved...
@@ -453,8 +477,18 @@ def select(self, runner, metrics):
# If precision was not achieved, ensure nreps is None
# (e.g. in cases where precision is lost after a success)
else:
+ solutions[metric]['target_met'] = 0
solutions[metric]['nreps'] = None
+ # Correct the result, as the algorithm cannot otherwise return a
+ # solution below initial_replications...
+ for metric, dictionary in solutions.items():
+ # Use find_position() to check for solution in initial replications
+ adj_nreps = self.find_position(observers[metric].dev)
+ # If there was a maintained solution, replace in solutions
+ if adj_nreps is not None and dictionary['nreps'] is not None:
+ if adj_nreps < dictionary['nreps']:
+ solutions[metric]['nreps'] = adj_nreps
+
# Extract minimum replications for each metric
nreps = {metric: value['nreps'] for metric, value in solutions.items()}
@@ -482,7 +516,7 @@ def confidence_interval_method(
param=Param(),
alpha=0.05,
desired_precision=0.05,
- min_rep=5,
+ min_rep=3,
verbose=False
):
"""
@@ -556,10 +590,11 @@ def confidence_interval_method(
# Get minimum number of replications where deviation is below target
try:
nreps = (
- results.iloc[min_rep:]
- .loc[results['deviation'] <= desired_precision]
- .iloc[0]
- .name
+ (results
+ .loc[results['replications'] >= min_rep]
+ .loc[results['deviation'] <= desired_precision]
+ .iloc[0]
+ .replications)
)
if verbose:
print(f'{metric}: Reached desired precision in {nreps} ' +
@@ -584,7 +619,7 @@ def confidence_interval_method(
def confidence_interval_method_simple(
- replications, metrics, param=Param(), desired_precision=0.05, min_rep=5,
+ replications, metrics, param=Param(), desired_precision=0.05, min_rep=3,
verbose=False
):
"""
@@ -657,11 +692,12 @@ def confidence_interval_method_simple(
# Get minimum number of replications where deviation is below target
try:
nreps = (
- cumulative.iloc[min_rep:]
- .loc[cumulative['deviation'] <= desired_precision]
- .iloc[0]
- .name
- ) + 1
+ (cumulative
+ .loc[cumulative['replications'] >= min_rep]
+ .loc[cumulative['deviation'] <= desired_precision]
+ .iloc[0]
+ .replications)
+ )
if verbose:
print(f'{metric}: Reached desired precision in {nreps} ' +
'replications.')
@@ -686,21 +722,21 @@ def confidence_interval_method_simple(
def plotly_confidence_interval_method(
- n_reps, conf_ints, metric_name, figsize=(1200, 400), file_path=None
+ conf_ints, metric_name, n_reps=None, figsize=(1200, 400), file_path=None
):
"""
Generates an interactive Plotly visualisation of confidence intervals
with increasing simulation replications.
Arguments:
- n_reps (int):
- The number of replications required to meet the desired precision.
conf_ints (pd.DataFrame):
A DataFrame containing confidence interval statistics, including
cumulative mean, upper/lower bounds, and deviations. As returned
by ReplicationTabulizer summary_table() method.
metric_name (str):
Name of metric being analysed.
+ n_reps (int, optional):
+ The number of replications required to meet the desired precision.
figsize (tuple, optional):
Plot dimensions in pixels (width, height).
file_path (str):
@@ -746,15 +782,16 @@ def plotly_confidence_interval_method(
)
# Vertical threshold line
- fig.add_shape(
- type='line',
- x0=n_reps,
- x1=n_reps,
- y0=0,
- y1=1,
- yref='paper',
- line={'color': 'red', 'dash': 'dash'},
- )
+ if n_reps is not None:
+ fig.add_shape(
+ type='line',
+ x0=n_reps,
+ x1=n_reps,
+ y0=0,
+ y1=1,
+ yref='paper',
+ line={'color': 'red', 'dash': 'dash'},
+ )
# Configure layout
fig.update_layout(
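The behaviour of the new `find_position()` can be checked in isolation. Below is a simplified standalone version (a plain generator stands in for `pd.Series(...).first_valid_index()`), applied to a hypothetical deviation list:

```python
def find_position(lst, half_width_precision, look_ahead):
    """First replication where the deviation is below the target
    precision and stays below it across the lookahead window.
    Simplified standalone sketch of the class method above."""
    # Skip leading None values (replications without a confidence
    # interval yet) - replaces pd.Series(lst).first_valid_index()
    start_index = next(
        (i for i, v in enumerate(lst) if v is not None), None)
    if start_index is None:
        return None
    for i in range(start_index, len(lst) - look_ahead + 1):
        # Check the current value plus the lookahead window
        if all(v < half_width_precision for v in lst[i:i + look_ahead]):
            return i + 1  # zero-based index -> replication count
    return None

# Precision (0.05) is first met at replication 4 and maintained
# through the lookahead window, so the solution is 4.
print(find_position([None, None, 0.08, 0.04, 0.03, 0.02, 0.01],
                    half_width_precision=0.05, look_ahead=3))  # 4
```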
diff --git a/tests/exp_results/overall.csv b/tests/exp_results/overall.csv
index 8d08f20..d7c5d59 100644
--- a/tests/exp_results/overall.csv
+++ b/tests/exp_results/overall.csv
@@ -1,5 +1,5 @@
,arrivals,mean_q_time_nurse,mean_time_with_nurse,mean_nurse_utilisation,mean_nurse_utilisation_tw,mean_nurse_q_length,count_unseen,mean_q_time_unseen
-mean,378.0,2.114432379323691,10.17019147705724,0.6366457796002349,0.639550942417509,0.5312404456976593,0.0,
-std_dev,15.198684153570664,0.8065769414595266,0.4803422428799903,0.022494099125355767,0.024254560268967756,0.19591429007197672,0.0,
-lower_95_ci,359.12834106644124,1.11293482933209,9.573767807256708,0.6087156665442539,0.6094349281392683,0.2879807249816362,0.0,
-upper_95_ci,396.87165893355876,3.115929929315292,10.766615146857774,0.664575892656216,0.6696669566957497,0.7745001664136824,0.0,
+mean,378.0,2.114432379323691,10.17019147705724,0.639550942417509,0.639550942417509,0.5312404456976593,0.0,
+std_dev,15.198684153570664,0.8065769414595266,0.4803422428799903,0.02425456026896803,0.024254560268967756,0.19591429007197672,0.0,
+lower_95_ci,359.12834106644124,1.11293482933209,9.573767807256708,0.6094349281392679,0.6094349281392683,0.2879807249816362,0.0,
+upper_95_ci,396.87165893355876,3.115929929315292,10.766615146857774,0.66966695669575,0.6696669566957497,0.7745001664136824,0.0,
diff --git a/tests/exp_results/replications.csv b/tests/exp_results/replications.csv
index e874a8e..6a86365 100644
--- a/tests/exp_results/replications.csv
+++ b/tests/exp_results/replications.csv
@@ -79,43 +79,43 @@ replications,data,cumulative_mean,stdev,lower_ci,upper_ci,deviation,metric
38,0.629223046319056,0.5091338712423118,0.06973451102588799,0.4862127078380333,0.5320550346465903,0.04501991460192925,mean_q_time_nurse
39,0.5493007154256917,0.5101637903239369,0.06911077717181525,0.48776066801614365,0.5325669126317302,0.04391358762166953,mean_q_time_nurse
40,0.6398790963369202,0.5134066729742615,0.07123539722742252,0.490624487688633,0.5361888582598899,0.04437454066120134,mean_q_time_nurse
-1,0.49958878609667684,0.49958878609667684,,,,,mean_nurse_utilisation
-2,0.5019774882864758,0.5007831371915763,,,,,mean_nurse_utilisation
-3,0.49809386117421955,0.49988671185245737,0.0019588797062941353,0.49502058490133,0.5047528388035847,0.009734459500022913,mean_nurse_utilisation
-4,0.4982487932220592,0.49947723219485785,0.0017968957016028138,0.4966179701515843,0.5023364942381313,0.0057245092648348355,mean_nurse_utilisation
-5,0.49693389991134934,0.4989685657381561,0.0019275200239417237,0.4965752335186108,0.5013618979577015,0.004796559109900422,mean_nurse_utilisation
-6,0.4928539383795759,0.4979494611783928,0.003033761918236169,0.49476572329843654,0.5011331990583491,0.006393696806944939,mean_nurse_utilisation
-7,0.5033858255951804,0.4987260846665053,0.0034484442545104858,0.4955368056594095,0.5019153636736011,0.006394851011710062,mean_nurse_utilisation
-8,0.5034113289416777,0.49931174020090185,0.0035967878792269982,0.4963047502832039,0.5023187301185997,0.006022269607536255,mean_nurse_utilisation
-9,0.5088177294470143,0.5003679612282477,0.004621709466701935,0.4968154008476054,0.50392052160889,0.007099895788535041,mean_nurse_utilisation
-10,0.49994369431171704,0.5003255345365946,0.004359454468057957,0.4972069686765276,0.5034441003966615,0.006233073558708982,mean_nurse_utilisation
-11,0.5097942141014932,0.501186323587949,0.005025424821187901,0.49781019725485304,0.5045624499210449,0.006736269874498095,mean_nurse_utilisation
-12,0.48962260282436343,0.5002226801909836,0.005839717311605493,0.49651230082862413,0.503933059553343,0.007417455284000405,mean_nurse_utilisation
-13,0.5033978686932535,0.500466925460389,0.0056600322314152305,0.4970466022143885,0.5038872487063893,0.006834264307984002,mean_nurse_utilisation
-14,0.4892236002933267,0.4996638308055988,0.006212979495790504,0.4960765632451358,0.5032510983660617,0.007179362081660522,mean_nurse_utilisation
-15,0.5131047752810153,0.5005598937706266,0.006920102913294844,0.49672766851151623,0.504392119029737,0.007655877561909585,mean_nurse_utilisation
-16,0.5009476110379539,0.5005841260998346,0.006686157192364971,0.4970213244225335,0.5041469277771358,0.007117288566578672,mean_nurse_utilisation
-17,0.48669127709236415,0.49976689968763044,0.007298236603839275,0.49601449267789494,0.503519306697366,0.007508314400335199,mean_nurse_utilisation
-18,0.4953608235769949,0.499522117681484,0.007156087901644342,0.4959634788891228,0.5030807564738452,0.007124086534703485,mean_nurse_utilisation
-19,0.4794301592441245,0.49846464618478087,0.008343338059131563,0.4944432859640797,0.502486006405482,0.008067493354805411,mean_nurse_utilisation
-20,0.4916972800926532,0.4981262778801745,0.008260593202462406,0.4942602012558477,0.5019923545045013,0.007761238055497196,mean_nurse_utilisation
-21,0.4926570184731424,0.4978658369560301,0.008139407166869865,0.49416082323390154,0.5015708506781587,0.007441791436787729,mean_nurse_utilisation
-22,0.4895340968911088,0.4974871214985337,0.008139443979842372,0.4938782942099475,0.5010959487871198,0.007254111981262222,mean_nurse_utilisation
-23,0.4893427907640909,0.4971330201622536,0.008131609812209985,0.4936166483910039,0.5006493919335033,0.007073301568465589,mean_nurse_utilisation
-24,0.5013717929835146,0.4973096356964728,0.00799979967396101,0.49393161655470574,0.5006876548382398,0.0067925873526182325,mean_nurse_utilisation
-25,0.4915750702731453,0.4970802530795397,0.007914901852179692,0.49381314216991157,0.5003473639891678,0.006572602491021388,mean_nurse_utilisation
-26,0.5060739795771849,0.4974261656371414,0.007953042651188475,0.49421386219171437,0.5006384690825685,0.006457849762109921,mean_nurse_utilisation
-27,0.5023111150028697,0.49760708968772394,0.007855059885121982,0.49449973148829673,0.5007144478871511,0.0062446019436283185,mean_nurse_utilisation
-28,0.4999763333861762,0.49769170553409725,0.0077212164434829565,0.4946977303526746,0.5006856807155199,0.006015722480666376,mean_nurse_utilisation
-29,0.49232621417270916,0.4975066885906011,0.007647267816605914,0.49459782348214554,0.5004155536990567,0.005846886434222975,mean_nurse_utilisation
-30,0.5046625973715585,0.4977452188832997,0.007626993512415558,0.4948972527007495,0.5005931850658499,0.005721734884645698,mean_nurse_utilisation
-31,0.4957265964121678,0.4976801020293922,0.0075075589939981,0.4949263071579969,0.5004338969007875,0.005533262953785293,mean_nurse_utilisation
-32,0.5035722335646021,0.49786423113986755,0.007458564167237306,0.49517513199766217,0.5005533302820729,0.0054012700130086945,mean_nurse_utilisation
-33,0.503455200045186,0.49803365444002873,0.007405334343055277,0.4954078370591477,0.5006594718209098,0.005272369361932795,mean_nurse_utilisation
-34,0.4996207391525908,0.4980803334021629,0.007297346985441026,0.4955341686694714,0.5006264981348544,0.005111955967624256,mean_nurse_utilisation
-35,0.4960667598321286,0.49802280272873334,0.007197284304297146,0.4955504490679206,0.5004951563895461,0.004964338273802657,mean_nurse_utilisation
-36,0.492980852621203,0.4978827485590797,0.007143320078197071,0.4954657967716168,0.5002997003465427,0.004854459798934277,mean_nurse_utilisation
-37,0.5081176936109076,0.4981593686956156,0.007241601781836921,0.49574489845153186,0.5005738389396994,0.0048467827683454905,mean_nurse_utilisation
-38,0.5021531975212685,0.4982644694541854,0.007172393741642748,0.49590696229548475,0.500621976612886,0.004731437425757259,mean_nurse_utilisation
-39,0.4998568861129943,0.4983053006505651,0.007081982984991065,0.49600958734090245,0.5006010139602277,0.004607041720538521,mean_nurse_utilisation
-40,0.5120208842516952,0.49864819024059337,0.0073192486945782165,0.4963073809463162,0.5009889995348705,0.004694310217285113,mean_nurse_utilisation
+1,0.4996878737378875,0.4996878737378875,,,,,mean_nurse_utilisation
+2,0.502039256887257,0.5008635653125723,,,,,mean_nurse_utilisation
+3,0.49825100523985966,0.4999927119550014,0.0019124349374454884,0.49524196020568306,0.5047434637043198,0.009501641995425542,mean_nurse_utilisation
+4,0.4982487932220592,0.49955673227176584,0.0017884587582845536,0.49671089528803786,0.5024025692554939,0.005696724315547416,mean_nurse_utilisation
+5,0.4971756327666573,0.49908051237074413,0.0018795918488355768,0.49674669084072776,0.5014143339007605,0.004676242554393112,mean_nurse_utilisation
+6,0.49300735898657777,0.49806832014004976,0.0029955785894941723,0.4949246532061252,0.5012119870739743,0.006311718306116261,mean_nurse_utilisation
+7,0.5033858255951804,0.498827963776497,0.003393717492508772,0.4956892985825577,0.5019666289704363,0.006292079478017393,mean_nurse_utilisation
+8,0.503744324527546,0.49944250887037817,0.003590725914847628,0.49644058688172754,0.5024444308590288,0.0060105456290459654,mean_nurse_utilisation
+9,0.5089730393154467,0.500501456697608,0.004623200517851957,0.49694775019359316,0.5040551632016229,0.007100292030042893,mean_nurse_utilisation
+10,0.500258819499236,0.5004771929777708,0.004359470530480651,0.49735861562733885,0.5035957703282028,0.006231207723726386,mean_nurse_utilisation
+11,0.5099498928998579,0.5013383475161424,0.005026126093131122,0.4979617500621445,0.5047149449701402,0.0067351669201589056,mean_nurse_utilisation
+12,0.48963287324037486,0.500362891326495,0.005863751403372907,0.4966372414307641,0.504088541222226,0.007445895689534429,mean_nurse_utilisation
+13,0.503450536555927,0.5006004024979898,0.005679052066978009,0.49716858568038924,0.5040322193155903,0.006855401634668635,mean_nurse_utilisation
+14,0.4892419208048691,0.49978908237705255,0.0062438851944885765,0.49618397039742673,0.5033941943566783,0.007213266769412977,mean_nurse_utilisation
+15,0.513532235246675,0.5007052925683607,0.006985198454402175,0.49683701860014695,0.5045735665365745,0.0077256502490148085,mean_nurse_utilisation
+16,0.5010361213888735,0.5007259693696428,0.006748849637728677,0.4971297612462965,0.504322177492989,0.007181988439452154,mean_nurse_utilisation
+17,0.4868038006734956,0.4999070182698694,0.007355396731860802,0.4961252222341666,0.5036888143055721,0.0075649988847750075,mean_nurse_utilisation
+18,0.4958184693140047,0.4996798766612102,0.007200560967961751,0.4960991219332725,0.5032606313891479,0.007166097526007655,mean_nurse_utilisation
+19,0.4795467775518281,0.4986202398659796,0.008384593137404391,0.49457899533404803,0.5026614843979111,0.008104854574330501,mean_nurse_utilisation
+20,0.49174375431681877,0.49827641558852154,0.008304554832230655,0.4943897642881335,0.5021630668889095,0.007800191176613166,mean_nurse_utilisation
+21,0.4927532298481551,0.49801340674374217,0.008183519787629576,0.49428831319738525,0.501738500290099,0.007479906154963581,mean_nurse_utilisation
+22,0.4895372933604174,0.49762812886268193,0.008188199294378059,0.49399768467864247,0.5012585730467214,0.007295496322397885,mean_nurse_utilisation
+23,0.4893466756619443,0.49726806568004117,0.008184184810000043,0.49372895877486694,0.5008071725852153,0.0071171007137454025,mean_nurse_utilisation
+24,0.5014248673491571,0.4974412657495877,0.008049138284614189,0.49404241273972305,0.5008401187594523,0.006832672003483602,mean_nurse_utilisation
+25,0.49163070655567664,0.49720884338183124,0.007964898299443496,0.4939210949530843,0.5004965918105783,0.0066124093980005,mean_nurse_utilisation
+26,0.5061403611532683,0.4975523632961173,0.007998135716712183,0.49432184636736065,0.5007828802248739,0.006492817976696022,mean_nurse_utilisation
+27,0.5023111150028697,0.49772861335933033,0.00789610751497964,0.4946050172583377,0.5008522094603229,0.006275701290127722,mean_nurse_utilisation
+28,0.5001010382032186,0.4978133428180406,0.007761464033472873,0.494803761250445,0.5008229243856362,0.0060456024552472815,mean_nurse_utilisation
+29,0.49234467530958076,0.4976247680763696,0.007688961982268346,0.49470004335548373,0.5005494927972554,0.005877369673924604,mean_nurse_utilisation
+30,0.5047271337062056,0.4978615135973641,0.007665700504468501,0.49499909398644587,0.5007239332082822,0.0057494293749187604,mean_nurse_utilisation
+31,0.49597569099468386,0.49780068061018085,0.0075444628021010905,0.49503334931268883,0.5005680119076729,0.00555911513439469,mean_nurse_utilisation
+32,0.5037414148141552,0.49798632855405506,0.007495712768106956,0.4952838359129814,0.5006888211951287,0.005426841031802149,mean_nurse_utilisation
+33,0.5035694955439618,0.49815551543253705,0.007441404777499444,0.4955169080317445,0.5007941228333297,0.005296754364952775,mean_nurse_utilisation
+34,0.49975737080141963,0.49820262882573946,0.00733293656044508,0.49564404630328035,0.5007612113481986,0.005135626298258761,mean_nurse_utilisation
+35,0.4960667598321286,0.4981416039973506,0.007233309854016007,0.4956568751272277,0.5006263328674735,0.004987997087944637,mean_nurse_utilisation
+36,0.4930832010715439,0.49800109280496707,0.0071789034320532755,0.4955721013427079,0.5004300842672262,0.004877482193016849,mean_nurse_utilisation
+37,0.5081466031144306,0.4982752957863039,0.007272345558693015,0.495850575056672,0.5007000165159358,0.004866227063907595,mean_nurse_utilisation
+38,0.502386448539968,0.4983834840166635,0.007204332697729616,0.49601547878433105,0.5007514892489959,0.00475137180158496,mean_nurse_utilisation
+39,0.4999737774066358,0.49842426077025254,0.007113466365742342,0.4961183417291758,0.5007301798113293,0.00462641813926395,mean_nurse_utilisation
+40,0.5121114216030034,0.4987664397910713,0.007347611253829817,0.4964165597102864,0.5011163198718562,0.0047113837125233185,mean_nurse_utilisation
diff --git a/tests/exp_results/run.csv b/tests/exp_results/run.csv
index eed24f6..2af693d 100644
--- a/tests/exp_results/run.csv
+++ b/tests/exp_results/run.csv
@@ -1,6 +1,6 @@
run_number,scenario,arrivals,mean_q_time_nurse,mean_time_with_nurse,mean_nurse_utilisation,mean_nurse_utilisation_tw,mean_nurse_q_length,count_unseen,mean_q_time_unseen
-0,0,403,2.0437879039607987,9.57743596674643,0.6409356249106212,0.6423324393767569,0.5490976835308012,0,
-1,0,382,1.2551167218327395,10.208371460384479,0.6447441378534594,0.6467972830722329,0.31963639182673764,0,
-2,0,369,3.392586488165945,10.731510270167153,0.6624456885958806,0.6680103775187581,0.8357927073077738,0,
-3,0,367,2.2271750333429026,10.523888711113448,0.6340615768641413,0.6395727423427239,0.5449154914912302,0,
+0,0,403,2.0437879039607987,9.57743596674643,0.6423324393767568,0.6423324393767569,0.5490976835308012,0,
+1,0,382,1.2551167218327395,10.208371460384479,0.6467972830722333,0.6467972830722329,0.31963639182673764,0,
+2,0,369,3.392586488165945,10.731510270167153,0.6680103775187587,0.6680103775187581,0.8357927073077738,0,
+3,0,367,2.2271750333429026,10.523888711113448,0.6395727423427235,0.6395727423427239,0.5449154914912302,0,
4,0,369,1.6534957493160707,9.8097509768747,0.6010418697770725,0.6010418697770727,0.4067599543317534,0,
diff --git a/tests/test_functionaltest.py b/tests/test_functionaltest.py
index 88b5f80..949b22e 100644
--- a/tests/test_functionaltest.py
+++ b/tests/test_functionaltest.py
@@ -60,7 +60,7 @@ def test_high_demand():
# Check that the utilisation as calculated from total_nurse_time_used
# does not exceed 1 or drop below 0
util = results['run']['mean_nurse_utilisation']
- assert util <= 1, (
+ assert util == pytest.approx(1, abs=1e-9) or util < 1, (
'The run `mean_nurse_utilisation` should not exceed 1, but ' +
f'found utilisation of {util}.'
)
@@ -92,6 +92,45 @@ def test_high_demand():
)
+def test_warmup_high_demand():
+ """
+ Test that utilisation is 1 due to high demand from warm-up (WU), even if
+ none of that resource usage actually reflects use by patients from the data
+ collection (DC) period.
+ """
+ param = Param(number_of_nurses=1,
+ patient_inter=0.1,
+ warm_up_period=100,
+ data_collection_period=10)
+ experiment = Runner(param)
+ results = experiment.run_single(run=0)
+
+ # ONLY REFLECTS DC PATIENTS
+ # Expect these to be NaN, as no patients in data collection period seen,
+ # and we're not interested in the wait times of the warm-up patients
+ assert np.isnan(results['run']['mean_q_time_nurse'])
+ assert np.isnan(results['run']['mean_time_with_nurse'])
+
+ # REFLECTS USE BY WU + DC PATIENTS
+ # Expect these to be 1, as nurses busy for whole time
+ assert results['run']['mean_nurse_utilisation'] == 1
+ assert results['run']['mean_nurse_utilisation_tw'] == 1
+ assert results['interval_audit']['utilisation'][0] == 1
+
+ # REFLECTS USE BY WU + DC PATIENTS
+ # Expect this to be positive and greater than arrivals
+ assert results['run']['mean_nurse_q_length'] > results['run']['arrivals']
+
+ # ONLY REFLECTS DC PATIENTS
+ # Expect this to match arrivals
+ assert results['run']['count_unseen'] == results['run']['arrivals']
+
+ # ONLY REFLECTS DC PATIENTS
+    # Expect positive and close to 5: arrivals spread evenly over the 10 min
+ assert results['run']['mean_q_time_unseen'] > 0
+    assert results['run']['mean_q_time_unseen'] == pytest.approx(5, rel=0.1)
+
+
def test_warmup_only():
"""
Ensures no results are recorded during the warm-up phase.
diff --git a/tests/test_functionaltest_replications.py b/tests/test_functionaltest_replications.py
index 7648bd9..6265996 100644
--- a/tests/test_functionaltest_replications.py
+++ b/tests/test_functionaltest_replications.py
@@ -91,18 +91,23 @@ def test_consistent_outputs(ci_function):
reps = 20
# Run the manual confidence interval method
- _, man_df = ci_function(
+ man_nreps, man_df = ci_function(
replications=reps, metrics=['mean_time_with_nurse'])
# Run the algorithm
analyser = ReplicationsAlgorithm(initial_replications=reps,
+ look_ahead=0,
replication_budget=reps)
- _, summary_table = analyser.select(runner=Runner(Param()),
- metrics=['mean_time_with_nurse'])
+ alg_nreps, alg_df = analyser.select(runner=Runner(Param()),
+ metrics=['mean_time_with_nurse'])
+
+ # Check that nreps are the same
+ assert man_nreps == alg_nreps
+
# Get first 20 rows (may have more if met precision and went into
# look ahead period beyond budget) and compare dataframes
pd.testing.assert_frame_equal(
- man_df, summary_table.head(20))
+ man_df, alg_df.head(20))
def test_algorithm_initial():
@@ -135,8 +140,8 @@ def test_algorithm_initial():
nrows = len(summary_table[summary_table['metric'] == metric])
assert nrows == initial_replications
- # Check that solution is equal to the initial replications
- assert nreps[metric] == initial_replications
+    # Check that each metric's solution is below initial_replications
+ assert nreps[metric] < initial_replications
def test_algorithm_nosolution():
diff --git a/tests/test_unittest_replications.py b/tests/test_unittest_replications.py
index 3185014..8439167 100644
--- a/tests/test_unittest_replications.py
+++ b/tests/test_unittest_replications.py
@@ -45,9 +45,9 @@ def test_klimit(look_ahead, n, exp):
"""
# Calculate additional replications that would be required
calc = ReplicationsAlgorithm(
- look_ahead=100, initial_replications=100)._klimit()
+ look_ahead=look_ahead, initial_replications=n)._klimit()
# Check that this meets our expected value
- assert calc == 100, (
+ assert calc == exp, (
f'With look_ahead {look_ahead} and n={n}, the additional ' +
f'replications required should be {exp} but _klimit() returned {calc}.'
)