issue_comments: 1074454687
html_url: https://github.com/simonw/datasette/issues/1679#issuecomment-1074454687
issue_url: https://api.github.com/repos/simonw/datasette/issues/1679
id: 1074454687
node_id: IC_kwDOBm6k_c5ACuCf
user: 9599
created_at: 2022-03-21T21:48:02Z
updated_at: 2022-03-21T21:48:02Z
author_association: OWNER
issue: 1175854982
reactions: { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }

body:

Here's another microbenchmark that measures how many nanoseconds it takes to run 1,000 vmops:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")

i = 0
out = []

def count():
    global i
    i += 1000
    out.append((i, time.perf_counter_ns()))

db.set_progress_handler(count, 1000)

print("Start:", time.perf_counter_ns())
all = db.execute("""
with recursive counter(x) as (
  select 0
  union
  select x + 1 from counter
)
select * from counter limit 10000;
""").fetchall()
print("End:", time.perf_counter_ns())
print()
print("So how long does it take to execute 1000 ops?")
prev_time_ns = None
for i, time_ns in out:
    if prev_time_ns is not None:
        print(time_ns - prev_time_ns, "ns")
    prev_time_ns = time_ns
```
Output:
```
So how long does it take to execute 1000 ops?
47290 ns
49573 ns
48226 ns
45674 ns
53238 ns
47313 ns
52346 ns
48689 ns
47092 ns
87596 ns
69999 ns
52522 ns
52809 ns
53259 ns
52478 ns
53478 ns
65812 ns
```
87596ns is 0.087596ms - so even a measurement rate of every 1,000 ops is easily fine-grained enough to capture differences of less than 0.1ms. If anything I could bump that default 1000 up - and I can definitely eliminate the
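That unit conversion can be checked directly (the variable names here are just for illustration):

```python
# Slowest interval observed in the run above, in nanoseconds
worst_ns = 87596
# 1 ms = 1,000,000 ns
worst_ms = worst_ns / 1_000_000
print(worst_ms, "ms")  # 0.087596 ms - still under the 0.1 ms threshold
```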
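For context on why the handler granularity matters: the same `set_progress_handler` hook can enforce a query time limit, because SQLite interrupts the running statement whenever the handler returns a truthy value. A minimal sketch, with an arbitrary 50 ms budget chosen purely for illustration:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
deadline = time.monotonic() + 0.05  # arbitrary 50 ms budget

def abort_if_over_budget():
    # A non-zero return value makes SQLite interrupt the current query
    return 1 if time.monotonic() > deadline else 0

# Check the deadline every 1,000 virtual machine ops
db.set_progress_handler(abort_if_over_budget, 1000)

try:
    db.execute("""
        with recursive counter(x) as (
            select 0 union select x + 1 from counter
        )
        select * from counter limit 100000000
    """).fetchall()
except sqlite3.OperationalError as e:
    print("query aborted:", e)  # typically "interrupted"
```

A smaller `n` in `set_progress_handler(callback, n)` enforces the limit more precisely but adds more callback overhead per query, which is exactly the trade-off the benchmark above is sizing.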