Timings were taken with hyperfine on an M3 MacBook Pro with 16 GB of RAM. Each program was given an input value of 40.
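The exact hyperfine invocation isn't reproduced below, but the output reports 3 runs per command and the warnings mention the `--warmup` option, so it was presumably something along these lines (the warmup count and `--command-name` labels are guesses, shown for a few of the commands):

```shell
# Hypothetical reconstruction of the benchmark invocation:
# a couple of warmup runs to fill caches, then 3 timed runs per command.
hyperfine --warmup 2 --runs 3 \
  --command-name "C"    './c/code 40' \
  --command-name "Rust" './rust/target/release/code 40' \
  --command-name "Node" 'node ./js/code.js 40'
```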
All figures are the mean of 3 runs.

| Language | Command | Mean ± σ | User / System | Range (min … max) | Notes |
|---|---|---|---|---|---|
| Kotlin | `java -jar kotlin/code.jar 40` | 560.4 ms ± 5.4 ms | 539.1 ms / 21.2 ms | 556.1 ms … 566.5 ms | |
| C | `./c/code 40` | 518.8 ms ± 3.0 ms | 506.8 ms / 3.4 ms | 516.0 ms … 521.9 ms | |
| Rust | `./rust/target/release/code 40` | 515.1 ms ± 1.5 ms | 504.0 ms / 3.1 ms | 514.1 ms … 516.8 ms | ¹ |
| Node | `node ./js/code.js 40` | 1.147 s ± 0.061 s | 1.135 s / 0.006 s | 1.103 s … 1.217 s | |
| Bun | `bun ./js/code.js 40` | 911.0 ms ± 161.8 ms | 900.0 ms / 6.4 ms | 815.7 ms … 1097.8 ms | ¹ |
| Bun (compiled) | `./js/bun 40` | 821.0 ms ± 3.3 ms | 813.2 ms / 4.8 ms | 817.6 ms … 824.1 ms | |
| Deno | `deno ./js/code.js 40` | 1.153 s ± 0.068 s | 1.112 s / 0.009 s | 1.104 s … 1.230 s | |
| PyPy | `pypy ./py/code.py 40` | 2.303 s ± 0.297 s | 2.219 s / 0.020 s | 2.118 s … 2.645 s | ² |
| C++ | `./cpp/code 40` | 520.7 ms ± 8.2 ms | 511.3 ms / 3.4 ms | 514.1 ms … 529.9 ms | |
| Go | `./go/code 40` | 1.054 s ± 0.044 s | 1.040 s / 0.005 s | 1.003 s … 1.080 s | ² |
| Node (jitless) | `node --jitless ./js/code.js 40` | 17.450 s ± 0.047 s | 17.403 s / 0.034 s | 17.399 s … 17.493 s | |
| Bun (jitless) | `bun ./js/code.js 40` | 818.3 ms ± 2.3 ms | 808.9 ms / 5.2 ms | 816.9 ms … 821.0 ms | ² |
| Deno (jitless) | `deno --v8-flags=--jitless ./js/code.js 40` | 17.304 s ± 0.147 s | 17.232 s / 0.035 s | 17.135 s … 17.395 s | ² |
| PyPy | `pypy ./py/code.py 40` | 2.135 s ± 0.023 s | 2.056 s / 0.017 s | 2.112 s … 2.158 s | |
| Java | `java jvm.code 40` | 550.9 ms ± 1.3 ms | 523.9 ms / 19.6 ms | 550.0 ms … 552.4 ms | |
| Scala | `./scala/code 40` | 725.2 ms ± 7.8 ms | 762.0 ms / 30.4 ms | 717.0 ms … 732.6 ms | |
| Ruby | `ruby ./ruby/code.rb 40` | 27.919 s ± 0.108 s | 27.838 s / 0.059 s | 27.795 s … 27.990 s | |
| PHP (JIT) | `php -dopcache.enable_cli=1 -dopcache.jit=on -dopcache.jit_buffer_size=64M ./php/code.php 40` | 2.412 s ± 0.001 s | 2.331 s / 0.014 s | 2.412 s … 2.413 s | |
| PHP | `php ./php/code.php 40` | 10.759 s ± 0.242 s | 10.480 s / 0.205 s | 10.484 s … 10.938 s | |
| R | `Rscript ./r/code.R 40` | 72.643 s ± 0.074 s | 72.134 s / 0.378 s | 72.574 s … 72.721 s | |
| Python | `python3.13 ./py/code.py 40` | 31.589 s ± 0.321 s | 31.513 s / 0.021 s | 31.269 s … 31.910 s | |
| Dart | `./dart/code 40` | 529.5 ms ± 2.1 ms | 514.4 ms / 4.4 ms | 527.1 ms … 531.0 ms | |
| Common Lisp | `common-lisp/code 40` | 573.9 ms ± 7.1 ms | 560.0 ms / 8.5 ms | 568.0 ms … 581.8 ms | |
| Zig | `./zig/code 40` | 510.5 ms ± 1.6 ms | 500.3 ms / 2.3 ms | 508.7 ms … 511.5 ms | |
| Dart | `./dart/code 40` | 2.386 s ± 0.026 s | 2.369 s / 0.005 s | 2.363 s … 2.415 s | |
| Inko | `./inko/code 40` | 2.462 s ± 0.005 s | 2.446 s / 0.075 s | 2.457 s … 2.467 s | |
| Nim | `./nim/code 40` | 545.8 ms ± 3.9 ms | 536.4 ms / 2.0 ms | 542.1 ms … 549.8 ms | |
| Free Pascal | `./fpc/code 40` | 2.137 s ± 0.003 s | 2.127 s / 0.003 s | 2.134 s … 2.140 s | |
| Crystal | `./crystal/code 40` | 541.3 ms ± 1.5 ms | 533.2 ms / 1.8 ms | 539.6 ms … 542.2 ms | ² |
| Odin | `./odin/code 40` | 527.5 ms ± 22.0 ms | 520.3 ms / 1.8 ms | 506.9 ms … 550.8 ms | |
| Objective-C | `./objc/code 40` | 15.613 s ± 0.088 s | 15.596 s / 0.009 s | 15.558 s … 15.715 s | ² |
| Fortran | `./fortran/code 40` | 524.1 ms ± 7.3 ms | 515.9 ms / 1.8 ms | 519.6 ms … 532.5 ms | ² |
| LuaJIT | `luajit ./lua/code 40` | 805.6 ms ± 1.4 ms | 764.2 ms / 2.2 ms | 804.4 ms … 807.1 ms | |
| Lua | `lua ./lua/code.lua 40` | 45.941 s ± 0.631 s | 45.816 s / 0.075 s | 45.214 s … 46.348 s | |
| Swift | `./swift/code 40` | 541.6 ms ± 2.3 ms | 535.2 ms / 2.1 ms | 539.2 ms … 543.9 ms | |
| Julia | `julia ./julia/code.jl 40` | 1.318 s ± 0.373 s | 2.057 s / 0.028 s | 1.096 s … 1.749 s | ² |
| Elixir | `elixir elixir/bench.exs 40` | 2.871 s ± 0.164 s | 2.620 s / 0.209 s | 2.761 s … 3.060 s | |
| C# | `./csharp/code/csharp 40` | 599.3 ms ± 7.3 ms | 582.4 ms / 10.0 ms | 594.9 ms … 607.8 ms | ¹ |
| Ruby | `ruby ./ruby/code.rb 40` | 27.919 s ± 0.049 s | 27.882 s / 0.023 s | 27.884 s … 27.975 s | |
| Ruby (YJIT) | `miniruby --yjit ./ruby/code.rb 40` | 11.015 s ± 0.081 s | 10.992 s / 0.010 s | 10.965 s … 11.108 s | ² |

¹ hyperfine warned that the first run was significantly slower than the rest, likely due to (filesystem) caches that were not yet filled despite the `--warmup` option.
² hyperfine detected statistical outliers and suggested re-running the benchmark on a quiet system, possibly with more `--warmup` runs or a `--prepare` step.
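To put the spread in perspective, relative slowdowns can be computed directly from the mean times reported above. A quick sketch, using a hand-picked subset of the results (values copied from the output; the selection of languages is mine):

```python
# Mean wall-clock times in seconds, copied from the benchmark results above.
means = {
    "Zig": 0.5105,
    "Rust": 0.5151,
    "C": 0.5188,
    "Node": 1.147,
    "Ruby": 27.919,
    "Python": 31.589,
    "R": 72.643,
}

# Use the fastest entry as the baseline and print each slowdown factor.
fastest = min(means, key=means.get)
for lang, t in sorted(means.items(), key=lambda kv: kv[1]):
    print(f"{lang:8s} {t:8.3f} s  ({t / means[fastest]:6.1f}x vs {fastest})")
```

By this measure the slowest entry (R) lands at roughly 142x the fastest (Zig), with CPython and Ruby in the 55–62x range.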
You can find all of the code and the compile / run / cleanup scripts at the GitHub repository.