
issues


3,044 rows sorted by updated_at descending


repo 16

  • datasette 2,098
  • sqlite-utils 604
  • github-to-sqlite 79
  • twitter-to-sqlite 73
  • dogsheep-photos 40
  • dogsheep-beta 38
  • healthkit-to-sqlite 24
  • evernote-to-sqlite 16
  • swarm-to-sqlite 15
  • apple-notes-to-sqlite 14
  • google-takeout-to-sqlite 13
  • pocket-to-sqlite 12
  • dogsheep.github.io 8
  • hacker-news-to-sqlite 6
  • inaturalist-to-sqlite 2
  • genome-to-sqlite 2

type 2

  • issue 2,436
  • pull 608

state 2

  • closed 2,260
  • open 784
id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
1570375808 I_kwDODFdgUs5dmgiA 79 Deploy demo job is failing due to rate limit simonw 9599 open 0     2 2023-02-03T20:05:01Z 2023-12-08T14:50:15Z   MEMBER  

https://github.com/dogsheep/github-to-sqlite/actions/runs/4080058087/jobs/7032116511

github-to-sqlite 207052882 issue    
{
    "url": "https://api.github.com/repos/dogsheep/github-to-sqlite/issues/79/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1983600865 PR_kwDOBm6k_c5e7WH7 2206 Bump the python-packages group with 1 update dependabot[bot] 49699333 open 0     1 2023-11-08T13:18:56Z 2023-12-08T13:46:24Z   CONTRIBUTOR simonw/datasette/pulls/2206

Bumps the python-packages group with 1 update: black.

Release notes

Sourced from black's releases.

23.11.0

Highlights

  • Support formatting ranges of lines with the new --line-ranges command-line option (#4020)

Stable style

  • Fix crash on formatting bytes strings that look like docstrings (#4003)
  • Fix crash when whitespace followed a backslash before newline in a docstring (#4008)
  • Fix standalone comments inside complex blocks crashing Black (#4016)
  • Fix crash on formatting code like await (a ** b) (#3994)
  • No longer treat leading f-strings as docstrings. This matches Python's behaviour and fixes a crash (#4019)

Preview style

  • Multiline dicts and lists that are the sole argument to a function are now indented less (#3964)
  • Multiline unpacked dicts and lists as the sole argument to a function are now also indented less (#3992)
  • In f-string debug expressions, quote types that are visible in the final string are now preserved (#4005)
  • Fix a bug where long case blocks were not split into multiple lines. Also enable general trailing comma rules on case blocks (#4024)
  • Keep requiring two empty lines between module-level docstring and first function or class definition (#4028)
  • Add support for single-line format skip with other comments on the same line (#3959)

Configuration

  • Consistently apply force exclusion logic before resolving symlinks (#4015)
  • Fix a bug in the matching of absolute path names in --include (#3976)

Performance

  • Fix mypyc builds on arm64 on macOS (#4017)

Integrations

  • Black's pre-commit integration will now run only on git hooks appropriate for a code formatter (#3940)

23.10.1

Highlights

  • Maintenance release to get a fix out for GitHub Action edge case (#3957)

Preview style

... (truncated)

Changelog

Sourced from black's changelog.

23.11.0

Highlights

  • Support formatting ranges of lines with the new --line-ranges command-line option (#4020)

Stable style

  • Fix crash on formatting bytes strings that look like docstrings (#4003)
  • Fix crash when whitespace followed a backslash before newline in a docstring (#4008)
  • Fix standalone comments inside complex blocks crashing Black (#4016)
  • Fix crash on formatting code like await (a ** b) (#3994)
  • No longer treat leading f-strings as docstrings. This matches Python's behaviour and fixes a crash (#4019)

Preview style

  • Multiline dicts and lists that are the sole argument to a function are now indented less (#3964)
  • Multiline unpacked dicts and lists as the sole argument to a function are now also indented less (#3992)
  • In f-string debug expressions, quote types that are visible in the final string are now preserved (#4005)
  • Fix a bug where long case blocks were not split into multiple lines. Also enable general trailing comma rules on case blocks (#4024)
  • Keep requiring two empty lines between module-level docstring and first function or class definition (#4028)
  • Add support for single-line format skip with other comments on the same line (#3959)

Configuration

  • Consistently apply force exclusion logic before resolving symlinks (#4015)
  • Fix a bug in the matching of absolute path names in --include (#3976)

Performance

  • Fix mypyc builds on arm64 on macOS (#4017)

Integrations

  • Black's pre-commit integration will now run only on git hooks appropriate for a code formatter (#3940)

23.10.1

Highlights

  • Maintenance release to get a fix out for GitHub Action edge case (#3957)

... (truncated)

Commits
  • 2a1c67e Prepare release 23.11.0 (#4032)
  • 72e7a2e Remove redundant condition from has_magic_trailing_comma (#4023)
  • 1a7d9c2 Preserve visible quote types for f-string debug expressions (#4005)
  • f4c7be5 docs: fix minor typo (#4030)
  • 2e4fac9 Apply force exclude logic before symlink resolution (#4015)
  • 66008fd [563] Fix standalone comments inside complex blocks crashing Black (#4016)
  • 50ed622 Fix long case blocks not split into multiple lines (#4024)
  • 46be1f8 Support formatting specified lines (#4020)
  • ecbd9e8 Fix crash with f-string docstrings (#4019)
  • e808e61 Preview: Keep requiring two empty lines between module-level docstring and fi...
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2206.org.readthedocs.build/en/2206/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2206/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1066474200 I_kwDOCGYnMM4_kRrY 344 Support STRICT tables simonw 9599 closed 0     14 2021-11-29T20:32:23Z 2023-12-08T05:22:39Z 2023-12-08T05:22:39Z OWNER  

New in SQLite 3.37.0, released a few days ago: https://www.sqlite.org/stricttables.html
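
For reference, a minimal sketch of what STRICT provides, assuming the underlying SQLite library is 3.37.0 or newer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
print(sqlite3.sqlite_version)  # must be 3.37.0 or newer for STRICT

conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT) STRICT")
try:
    # A STRICT table rejects values that do not match the declared column type
    conn.execute("INSERT INTO items (id, name) VALUES (1, 2.5)")
except sqlite3.IntegrityError as ex:
    print(ex)  # cannot store REAL value in TEXT column items.name
```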

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/344/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
2001006157 PR_kwDOCGYnMM5f2OZC 604 Add more STRICT table support tkhattra 16437338 closed 0     4 2023-11-19T19:38:53Z 2023-12-08T05:17:20Z 2023-12-08T05:05:27Z CONTRIBUTOR simonw/sqlite-utils/pulls/604
  • https://github.com/simonw/sqlite-utils/issues/344#issuecomment-982014776

Make table.transform() preserve STRICT mode.


:books: Documentation preview :books:: https://sqlite-utils--604.org.readthedocs.build/en/604/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/604/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1988525411 I_kwDOCGYnMM52hn1j 603 Python 3.12 Bug report constantinedev 1324252 open 0     1 2023-11-10T22:57:48Z 2023-12-08T05:10:31Z   NONE  

I started with the new Python version 3.12.0 and hit this error when connecting to a database:

```
Traceback (most recent call last):
  File "/home/t/Development/python/FKPJ/ClinicSYS/run.py", line 1, in <module>
    import re, os, io, json, sqlite_utils, requests, pytz, logging
  File "/home/t/.local/lib/python3.12/site-packages/sqlite_utils/__init__.py", line 1, in <module>
    from .db import Database
  File "/home/t/.local/lib/python3.12/site-packages/sqlite_utils/db.py", line 277, in <module>
    class Database:
  File "/home/t/.local/lib/python3.12/site-packages/sqlite_utils/db.py", line 306, in Database
    filename_or_conn: Optional[Union[str, pathlib.Path, sqlite3.Connection]] = None,
                                                        ^^^^^^^^^^^^^^^^^^
```

This bug has been present in sqlite-utils since v3.33. Has anyone else hit it?

For now my workaround is to pin sqlite-utils to v3.32.1 on Python 3.12 [tested], but the sqlite3.Connection problem remains.

This doesn't happen on Python versions up to 3.11 [tested], only on Python 3.12.0. I have confirmed the error comes from the sqlite3.Connection reference in sqlite_utils. What can I do?

Let's fix it together.

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/603/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
2007893839 I_kwDOCGYnMM53rgdP 605 Insert fails with `Error: Python int too large to convert to SQLite INTEGER`; can we use `NUMERIC` here? Zac-HD 12229877 closed 0     1 2023-11-23T10:19:46Z 2023-12-08T05:07:54Z 2023-12-08T05:07:54Z NONE  

I'm currently working on a new feature for Hypothesis, where we can dump a tidy jsonlines table of all the test cases we tried - including arguments, outcomes, timings, coverage, etc. Exploring this seems like a perfect case for sqlite-utils and datasette, but I pretty quickly ran into an integer overflow problem and don't want to recommend that experience to my users.

I originally went to report this as a bug... and then found https://github.com/simonw/sqlite-utils/issues/309#issuecomment-895581038 almost exactly matched my repro 😅

https://github.com/simonw/sqlite-utils/issues/110#issuecomment-626391063 suggests that using NUMERIC would avoid this overflow error, although "If the TEXT value is a well-formed integer literal that is too large to fit in a 64-bit signed integer, it is converted to REAL." suggests that this would come at the cost of rounding to the nearest float value. Maybe I should just convert large integers to float before writing out my json?

After a bit more hacking, "manually cast large integers to float" seems like a decent solution for my particular case, but having written it up I thought I might as well post this issue anyway - I hope it's useful feedback, and won't mind at all if you close as wontfix if it's not.
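
For context, a quick sketch of the failure and the float workaround described above (any value beyond the 64-bit signed integer range triggers the error):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (n)")

big = 2 ** 63  # one past the maximum 64-bit signed integer
try:
    conn.execute("INSERT INTO cases VALUES (?)", (big,))
except OverflowError as ex:
    print(ex)  # Python int too large to convert to SQLite INTEGER

# The workaround discussed above: cast oversized integers to float before
# writing, accepting the rounding that comes with a 64-bit REAL
conn.execute("INSERT INTO cases VALUES (?)", (float(big),))
print(conn.execute("SELECT n FROM cases").fetchone())  # (9.223372036854776e+18,)
```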

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/605/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
2029908157 I_kwDOBm6k_c54_fC9 2214 CSV export fails for some `text` foreign key references precipice 2874 open 0     1 2023-12-07T05:04:34Z 2023-12-07T07:36:34Z   NONE  

I'm starting this issue without a clear reproduction in case someone else has seen this behavior, and to use the issue as a notebook for research.

I'm using Datasette with the SWITRS data set, which is a California Highway Patrol collection of traffic incident data from the past decade or so. I receive data from them in CSV and want to work with it in Datasette, then export it to CSV for mapping in Felt.com.

Their data makes extensive use of codes for incident column data (1 for Monday and so on), some of it integer codes and some of it letter/text codes. The text codes are sometimes blank or -. During import, I'm creating lookup tables for foreign key references to make the Datasette UI presentation of the data easier to read.

If I import the data and set up the integer foreign keys, everything works fine, but if I set up the text foreign keys, CSV export starts to fail.

The foreign key configuration is as follows:

```
# Some tables use integer ids, like sensible tables do. Let's import them first
# since we favor them.
for TABLE in DAY_OF_WEEK CHP_SHIFT POPULATION SPECIAL_COND BEAT_TYPE COLLISION_SEVERITY
do
    sqlite-utils create-table records.db $TABLE id integer name text --pk=id
    sqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv
    sqlite-utils add-foreign-key records.db collisions $TABLE $TABLE id
    sqlite-utils create-index records.db collisions $TABLE
done

# Other tables use letter keys, like they were raised by WOLVES. Let's put them
# at the end of the import queue.
for TABLE in WEATHER_1 WEATHER_2 LOCATION_TYPE RAMP_INTERSECTION SIDE_OF_HWY \
    PRIMARY_COLL_FACTOR PCF_CODE_OF_VIOL PCF_VIOL_CATEGORY TYPE_OF_COLLISION MVIW \
    PED_ACTION ROAD_SURFACE ROAD_COND_1 ROAD_COND_2 LIGHTING CONTROL_DEVICE \
    STWD_VEHTYPE_AT_FAULT CHP_VEHTYPE_AT_FAULT PRIMARY_RAMP SECONDARY_RAMP
do
    sqlite-utils create-table records.db $TABLE key text name text --pk=key
    sqlite-utils insert records.db $TABLE lookup-tables/$TABLE.csv --csv
    sqlite-utils add-foreign-key records.db collisions $TABLE $TABLE key
    sqlite-utils create-index records.db collisions $TABLE
done
```

You can see the full code and import script here: https://github.com/radical-bike-lobby/switrs-db

If I run this code and then hit the CSV export link in the Datasette interface (the simple link or the "advanced" dialog), export fails after a small number of CSV rows are written. I am not seeing any detailed error messages but this appears in the logging output:

```
INFO:     127.0.0.1:57885 - "GET /records/collisions.csv?_facet=PRIMARY_RD&PRIMARY_RD=ASHBY+AV&_labels=on&_size=max HTTP/1.1" 200 OK
Caught this error:
```

(No other output follows `Caught this error:` other than a blank line.)

I've stared at the rows directly after the error occurs and can't yet see what is causing the problem. I'm going to set up a development environment and see if I get any more detailed error output, and then stare more at some problematic lines to see if I can get a simple reproduction.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2214/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
2029161033 I_kwDOCGYnMM548opJ 606 str and int as aliases for text and integer simonw 9599 closed 0     2 2023-12-06T18:35:49Z 2023-12-06T19:44:04Z 2023-12-06T18:49:32Z OWNER  

I keep making this mistake:

```bash
sqlite-utils add-column content.db assets _since int
```

```
Usage: sqlite-utils add-column [OPTIONS] PATH TABLE COL_NAME [[integer|float|blob|text|INTEGER|FLOAT|BLOB|TEXT]]
Try 'sqlite-utils add-column -h' for help.

Error: Invalid value for '[[integer|float|blob|text|INTEGER|FLOAT|BLOB|TEXT]]': 'int' is not one of 'integer', 'float', 'blob', 'text', 'INTEGER', 'FLOAT', 'BLOB', 'TEXT'.
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/606/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
2028698018 I_kwDOBm6k_c5463mi 2213 feature request: gzip compression of database downloads fgregg 536941 open 0     1 2023-12-06T14:35:03Z 2023-12-06T15:05:46Z   CONTRIBUTOR  

At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed.

This is similar to #1213, but for me I don't need datasette to compress HTML and JSON because my CDN layer does it for me; Cloudflare, at least, will not compress an "application" mimetype.

(see list of mimetype: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)
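
This isn't a built-in Datasette feature; as a stopgap, one hedged sketch is to wrap Datasette's ASGI app in Starlette's GZipMiddleware (assumes the starlette and uvicorn packages are installed; records.db is a hypothetical database file):

```python
# A sketch, not a built-in feature: compress every response, including the
# database download, by wrapping Datasette's ASGI app in GZipMiddleware
import uvicorn
from datasette.app import Datasette
from starlette.middleware.gzip import GZipMiddleware

ds = Datasette(["records.db"])  # hypothetical database file
app = GZipMiddleware(ds.app(), minimum_size=1000)

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8001)
```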

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2213/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
2023057255 I_kwDOBm6k_c54lWdn 2212 Can't filter with numbers fzakaria 605070 open 0     0 2023-12-04T05:26:29Z 2023-12-04T05:26:29Z   NONE  

I have a schema that uses numbers for a column (it's actually a boolean 1 or 0, but SQLite doesn't have a boolean type). I can't seem to get faceting, or even filtering, to work on this column.

My guess is that Datasette is "stringifying" the number and it's not matching? Example: https://debian-sqlelf.fly.dev/debian/elf_symbols?_sort_desc=name&_facet=exported&exported=0

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2212/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
2019811176 I_kwDOBm6k_c54Y99o 2211 Unreachable exception handlers for `sqlite3.OperationalError` mattparmett 1214074 open 0     0 2023-12-01T00:50:22Z 2023-12-01T00:50:22Z   NONE  

There are several places where sqlite3.OperationalError is caught as part of an exception handler which catches multiple exceptions, but is then caught again immediately afterwards by a dedicated exception handler.

Because the exception will be caught by the first handler, the logic in the second handler is unreachable and will never be executed. If this is intended behavior, the second handler can be removed. If this is not intended, and the second handler should be the one that catches this exception, then sqlite3.OperationalError should be removed from the tuple of exceptions in the first handler.

This issue was found via a CodeQL query on the repository, and I've listed the occurrences found by the query below. There may be other instances of this issue in the code that were not surfaced by the query. I'd be happy to share the query if others would like to view or run it.

One example:

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/views/database.py#L534-L537

Other instances:

https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L266-L270 https://github.com/simonw/datasette/blob/main/datasette/views/base.py#L452-L456
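
The shape of the problem, reduced to a standalone sketch (sqlite3.OperationalError is also a subclass of sqlite3.DatabaseError, which is why the first handler always wins):

```python
import sqlite3

def run(conn, sql):
    try:
        return conn.execute(sql)
    except (sqlite3.OperationalError, sqlite3.DatabaseError) as ex:
        # The first handler that lists the exception always wins...
        print("first handler:", ex)
    except sqlite3.OperationalError as ex:
        # ...so this dedicated handler is unreachable dead code
        print("second handler:", ex)

run(sqlite3.connect(":memory:"), "select * from no_such_table")
```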

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2211/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
564833696 MDU6SXNzdWU1NjQ4MzM2OTY= 670 Prototoype for Datasette on PostgreSQL simonw 9599 open 0     15 2020-02-13T17:17:55Z 2023-11-17T15:32:21Z   OWNER  

I thought this would never happen, but now that I'm deep in the weeds of running SQLite in production for Datasette Cloud I'm starting to reconsider my policy of only supporting SQLite.

Some of the factors making me think PostgreSQL support could be worth the effort:

  • Serverless. I'm getting increasingly excited about writable-database use-cases for Datasette. If it could talk to PostgreSQL then users could easily deploy it on Heroku or other serverless providers that can talk to a managed RDS-style PostgreSQL.
  • Existing databases. Plenty of organizations have PostgreSQL databases. They can export to SQLite using db-to-sqlite but that's a pretty big barrier to getting started - being able to run datasette postgresql://connection-string and start trying it out would be a massively better experience.
  • Data size. I keep running into use-cases where I want to run Datasette against many GBs of data. SQLite can do this but PostgreSQL is much more optimized for large data, especially given the existence of tools like Citus.
  • Marketing. Convincing people to trust their data to SQLite is potentially a big barrier to adoption. Even if I've convinced myself it's trustworthy I still have to convince everyone else.
  • It might not be that hard? If this required a ground-up rewrite it wouldn't be worth the effort, but I have a hunch that it may not be too hard - most of the SQL in Datasette should work on both databases since it's almost all portable SELECT statements. If Datasette did DML this would be a lot harder, but it doesn't.
  • Plugins! This feels like a natural surface for a plugin - at which point people could add MySQL support and suchlike in the future.

The above reasons feel strong enough to justify a prototype.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/670/reactions",
    "total_count": 19,
    "+1": 14,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 5,
    "rocket": 0,
    "eyes": 0
}
   
1994861266 PR_kwDOBm6k_c5fhgOS 2209 Fix query for suggested facets with column named value rgieseke 198537 open 0     3 2023-11-15T14:13:30Z 2023-11-15T15:31:12Z   CONTRIBUTOR simonw/datasette/pulls/2209

See discussion in https://github.com/simonw/datasette/issues/2208


:books: Documentation preview :books:: https://datasette--2209.org.readthedocs.build/en/2209/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2209/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1994857251 I_kwDOBm6k_c525xsj 2208 No suggested facets when a column named 'value' is included rgieseke 198537 open 0     1 2023-11-15T14:11:17Z 2023-11-15T14:18:59Z   CONTRIBUTOR  

When a column named 'value' is included, no suggested facets are shown, because the query uses an alias of 'value'.

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L168-L174

Currently the following is shown (from https://latest.datasette.io/fixtures/facetable)

When I add a column named 'value' only the JSON facets are processed.

I think that not using aliases could be a solution (except if someone wants to use a column named count(*) though this seems to be unlikely). I'll open a PR with that.

There is also a TODO with a similar question in the same file. I have not looked into that yet.

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/facets.py#L512
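
A simplified reconstruction of the collision (table and column names here are illustrative): when the faceted column is aliased to value but the table also has a real column called value, the unqualified name in the query resolves to the real column rather than the alias:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facetable (colour TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO facetable VALUES (?, ?)",
    [("red", None), ("red", None), ("blue", None)],
)

# A facet-suggestion style query that aliases the faceted column to "value"
sql = (
    "select colour as value, count(*) as n from facetable "
    "where value is not null group by value"
)
# "value" resolves to the real (all-NULL) column, not the alias, so the
# query returns nothing and no facet is suggested
print(conn.execute(sql).fetchall())  # []
```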

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2208/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1994845152 I_kwDOBm6k_c525uvg 2207 ModuleNotFoundError: No module named 'click_default_group' honzajavorek 283441 open 0     0 2023-11-15T14:04:32Z 2023-11-15T14:04:32Z   NONE  

No matter what I do, I'm getting this error:

```
$ datasette
Traceback (most recent call last):
  File "/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/bin/datasette", line 5, in <module>
    from datasette.cli import cli
  File "/Users/honza/Library/Caches/pypoetry/virtualenvs/juniorguru-Lgaxwd2n-py3.11/lib/python3.11/site-packages/datasette/cli.py", line 6, in <module>
    from click_default_group import DefaultGroup
ModuleNotFoundError: No module named 'click_default_group'
```

I have datasette in my dependencies like this:

```toml
[tool.poetry.group.dev.dependencies]
datasette = {version = "1.0a7", allow-prereleases = true}
```

I had the latest regular version (not pre-release) there originally, but the result was the same:

```toml
[tool.poetry.group.dev.dependencies]
datasette = "0.64.5"
```

Full pyproject.toml is at https://github.com/honzajavorek/junior.guru/ Previously datasette worked for me, but I guess something had to upgrade and now I can't even launch it.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2207/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1959278971 PR_kwDOBm6k_c5dpF-F 2202 Bump the python-packages group with 1 update dependabot[bot] 49699333 closed 0     2 2023-10-24T13:40:21Z 2023-11-08T13:19:03Z 2023-11-08T13:19:01Z CONTRIBUTOR simonw/datasette/pulls/2202

Bumps the python-packages group with 1 update: black.

Release notes

Sourced from black's releases.

23.10.1

Highlights

  • Maintenance release to get a fix out for GitHub Action edge case (#3957)

Preview style

  • Fix merging implicit multiline strings that have inline comments (#3956)
  • Allow empty first line after block open before a comment or compound statement (#3967)

Packaging

  • Change Dockerfile to hatch + compile black (#3965)

Integrations

  • The summary output for GitHub workflows is now suppressible using the summary parameter. (#3958)
  • Fix the action failing when Black check doesn't pass (#3957)

Documentation

  • It is known Windows documentation CI is broken psf/black#3968

23.10.0

Stable style

  • Fix comments getting removed from inside parenthesized strings (#3909)

Preview style

  • Fix long lines with power operators getting split before the line length (#3942)
  • Long type hints are now wrapped in parentheses and properly indented when split across multiple lines (#3899)
  • Magic trailing commas are now respected in return types. (#3916)
  • Require one empty line after module-level docstrings. (#3932)
  • Treat raw triple-quoted strings as docstrings (#3947)

Configuration

  • Fix cache versioning logic when BLACK_CACHE_DIR is set (#3937)

Parser

  • Fix bug where attributes named type were not accepted inside match statements (#3950)
  • Add support for PEP 695 type aliases containing lambdas and other unusual expressions (#3949)

... (truncated)

Changelog

Sourced from black's changelog.

23.10.1

Highlights

  • Maintenance release to get a fix out for GitHub Action edge case (#3957)

Preview style

  • Fix merging implicit multiline strings that have inline comments (#3956)
  • Allow empty first line after block open before a comment or compound statement (#3967)

Packaging

  • Change Dockerfile to hatch + compile black (#3965)

Integrations

  • The summary output for GitHub workflows is now suppressible using the summary parameter. (#3958)
  • Fix the action failing when Black check doesn't pass (#3957)

Documentation

  • It is known Windows documentation CI is broken psf/black#3968

23.10.0

Stable style

  • Fix comments getting removed from inside parenthesized strings (#3909)

Preview style

  • Fix long lines with power operators getting split before the line length (#3942)
  • Long type hints are now wrapped in parentheses and properly indented when split across multiple lines (#3899)
  • Magic trailing commas are now respected in return types. (#3916)
  • Require one empty line after module-level docstrings. (#3932)
  • Treat raw triple-quoted strings as docstrings (#3947)

Configuration

  • Fix cache versioning logic when BLACK_CACHE_DIR is set (#3937)

Parser

  • Fix bug where attributes named type were not accepted inside match statements (#3950)
  • Add support for PEP 695 type aliases containing lambdas and other unusual expressions

... (truncated)

Commits
  • 744d23b Prepare release 23.10.1 (#3969)
  • 8de4be5 Fix CI failing (#3957)
  • c0adca3 docs: specifies the use of the .git-blame-ignore-revs file (#3961)
  • a7643fa Add summary parameter to action (#3958)
  • d291c23 Move Docker image to hatch + compile (#3965)
  • 7f1c578 Bump peter-evans/create-or-update-comment from 3.0.2 to 3.1.0 (#3966)
  • 2db5ab0 Allow empty line after block open before a comment or compound statement (#3967)
  • 0a37888 Fix typos in CHANGES.md (#3963)
  • 882d879 Fix merging implicit multiline strings that have inline comments (#3956)
  • 9edba85 Prepare release 23.10.0 (#3951)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2202.org.readthedocs.build/en/2202/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2202/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1978603203 I_kwDOCGYnMM517xbD 602 `sqlite-utils transform` removes the `AUTOINCREMENT` keyword ArsTapatun 4472046 open 0     0 2023-11-06T08:48:43Z 2023-11-06T08:48:43Z   NONE  

Context

We ran into this bug randomly, noticing that deleted ROWID would get reused after migrating the DB. Using transform to change any column in the table will also unexpectedly strip away the AUTOINCREMENT keyword from the primary key definition, even if it was not the transformation target.

Reproducible example

Original database

```sql
$ sqlite3 test.db << EOF
CREATE TABLE mytable (
    col1 INTEGER PRIMARY KEY AUTOINCREMENT,
    col2 TEXT NOT NULL
)
EOF

$ sqlite3 test.db ".schema mytable"
CREATE TABLE mytable (
    col1 INTEGER PRIMARY KEY AUTOINCREMENT,
    col2 TEXT NOT NULL
);
```

Modified database after sqlite-utils

```sql
$ sqlite-utils transform test.db mytable --rename col2 renamedcol2

$ sqlite3 test.db "SELECT sql FROM sqlite_master WHERE name = 'mytable';"
CREATE TABLE IF NOT EXISTS "mytable" (
    [col1] INTEGER PRIMARY KEY,
    [renamedcol2] TEXT NOT NULL
);
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/602/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1978023780 I_kwDOBm6k_c515j9k 2205 request.post_vars() method obliterates form keys with multiple values simonw 9599 open 0   Datasette 1.0a-next 8755003 3 2023-11-05T23:25:08Z 2023-11-06T04:10:34Z   OWNER  

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L137-L139

In GET requests you can do ?foo=1&foo=2 - you can do the same in POST requests, but the dict() call here eliminates those duplicates.

You can't even try calling post_body() and implement your own custom parsing because of: - #2204
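
A sketch of the difference, using the standard library's parse_qs (which already keeps duplicates) against the dict() flattening described above:

```python
from urllib.parse import parse_qs

body = "foo=1&foo=2"

# parse_qs keeps every value for a repeated key...
print(parse_qs(body))  # {'foo': ['1', '2']}

# ...but collapsing it into a plain dict keeps only one value per key,
# which is the data loss described in this issue
flattened = dict((key, values[0]) for key, values in parse_qs(body).items())
print(flattened)  # {'foo': '1'}
```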

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2205/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1978022687 I_kwDOBm6k_c515jsf 2204 request.post_body() can only be called once simonw 9599 open 0     0 2023-11-05T23:22:03Z 2023-11-05T23:23:23Z   OWNER  

This code here:

https://github.com/simonw/datasette/blob/452a587e236ef642cbc6ae345b58767ea8420cb5/datasette/utils/asgi.py#L127-L135

It consumes the messages, which means if you try to call it a second time you won't be able to get at the body.

This is efficient - we don't end up with a request object property with potentially megabytes of content that we never look at again - but it's inconvenient for cases like middleware or functions where we don't know if the body has been consumed yet or not.

Potential solution: set request._body the first time it is called, and return that on subsequent calls.

Potential optimization: only do this for bodies that are shorter than a certain threshold - maybe 1MB - and raise an exception if you attempt to call post_body() multiple times against one of those larger bodies.

I'm a bit nervous about that option though, since it could result in errors that don't show up in testing but do show up in production.
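
A sketch of the "cache it on first call" idea against a simplified stand-in for the ASGI receive loop (the class and names are illustrative, not Datasette's actual implementation):

```python
import asyncio

class Request:
    """Simplified stand-in for an ASGI request object."""

    def __init__(self, receive):
        self._receive = receive
        self._body = None  # cache for the potential solution above

    async def post_body(self):
        if self._body is None:
            chunks = []
            more_body = True
            while more_body:
                message = await self._receive()
                chunks.append(message.get("body", b""))
                more_body = message.get("more_body", False)
            self._body = b"".join(chunks)
        return self._body

async def main():
    async def receive():
        return {"type": "http.request", "body": b"foo=1", "more_body": False}

    request = Request(receive)
    first = await request.post_body()
    second = await request.post_body()  # served from the cache
    assert first == second == b"foo=1"

asyncio.run(main())
```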

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2204/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
959137143 MDU6SXNzdWU5NTkxMzcxNDM= 1415 feature request: document minimum permissions for service account for cloudrun fgregg 536941 open 0     4 2021-08-03T13:48:43Z 2023-11-05T16:46:59Z   CONTRIBUTOR  

Thanks again for such a powerful project.

For deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions.

It would be great to document what those minimum permission that need to be set in the IAM.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1977726056 I_kwDOBm6k_c514bRo 2203 custom plugin not seen as sql function LyzardKing 7113541 open 0     0 2023-11-05T10:30:19Z 2023-11-05T10:30:19Z   NONE  

Hi, I'm not sure if this is the right repo for this issue.

I'm using datasette with the parquet plugin (to read a DuckDB database) and the jellyfish plugin. Both work perfectly.

Now I need to create a simple plugin that uses the python rouge package and returns a similarity score (similarly to how the jellyfish plugin works). If I create a custom plugin, even the example hello_world one, copied directly from the tutorial, I get the following error: duckdb.duckdb.CatalogException: Catalog Error: Scalar Function with name hello_world does not exist!

Since the jellyfish plugin doesn't do anything more complex, I'm wondering if there is some other kind of issue with my setup.
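
For reference, the tutorial's hello_world plugin registers its SQL function through the prepare_connection hook, which hands the plugin a sqlite3 connection - so a plausible explanation is that a connection backed by DuckDB never sees functions registered this way:

```python
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    # conn is a sqlite3 connection; functions registered here exist only
    # on SQLite connections, not on a DuckDB connection opened elsewhere
    conn.create_function("hello_world", 0, lambda: "Hello world!")
```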

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2203/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1977155641 I_kwDOCGYnMM512QA5 601 Move plugin directory into documentation simonw 9599 open 0     0 2023-11-04T04:07:52Z 2023-11-04T04:07:52Z   OWNER  

https://github.com/simonw/sqlite-utils-plugins should be in the official documentation.

I can use the same pattern as https://llm.datasette.io/en/stable/plugins/directory.html

https://til.simonwillison.net/readthedocs/stable-docs

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/601/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1976986318 I_kwDOCGYnMM511mrO 599 Cannot find spatialite on arm64 linux MikeCoats 37802088 closed 0     1 2023-11-03T22:05:51Z 2023-11-04T01:06:31Z 2023-11-04T00:33:28Z CONTRIBUTOR  

Initially, I found an issue in datasette where it wouldn’t find spatialite when running on my Radxa Rock 5B - an RK3588 powered SBC, running the arm64 build of Debian Bullseye. I confirmed the same behaviour on my Raspberry Pi 4 - a BCM2711 powered SBC, running the arm64 build of Debian Bookworm.

```
$ datasette --load-extension=spatialite example.db
Error: Could not find SpatiaLite extension
```

I did some digging and realised the issue originates in this project. Even with the libsqlite3-mod-spatialite package installed, pytest skips all of the GIS tests in the project.

```
$ apt list --installed | grep spatial
[…]
libsqlite3-mod-spatialite/stable,now 5.0.1-3 arm64 [installed]

$ ls -l /usr/lib/*/*spatial*
lrwxrwxrwx 1 root root      23 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so -> mod_spatialite.so.7.1.0
lrwxrwxrwx 1 root root      23 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so.7 -> mod_spatialite.so.7.1.0
-rw-r--r-- 1 root root 7348584 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so.7.1.0
```

```
$ pytest
tests/test_get.py ......                 [ 73%]
tests/test_gis.py ssssssssssss           [ 75%]
tests/test_hypothesis.py ....            [ 75%]
```

I tracked the issue down to the find_spatialite() function in the utils.py file. The SPATIALITE_PATHS array doesn't have an entry for the location of this module on arm64 Linux.
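
Until the path is added upstream (see PR #600 below), a hedged workaround sketch is to load the module by hand from its Debian/Ubuntu arm64 location:

```python
import sqlite3

# Debian/Ubuntu arm64 location confirmed in the listing above
SPATIALITE_ARM64 = "/usr/lib/aarch64-linux-gnu/mod_spatialite.so"

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
conn.load_extension(SPATIALITE_ARM64)
print(conn.execute("select spatialite_version()").fetchone()[0])
```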

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/599/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1884335789 PR_kwDOCGYnMM5Zs0KB 591 Test against Python 3.12 preview simonw 9599 closed 0     3 2023-09-06T16:10:00Z 2023-11-04T00:58:03Z 2023-11-04T00:58:02Z OWNER simonw/sqlite-utils/pulls/591

https://dev.to/hugovk/help-test-python-312-beta-1508/


:books: Documentation preview :books:: https://sqlite-utils--591.org.readthedocs.build/en/591/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/591/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
0  
1553425465 I_kwDOCGYnMM5cl2Q5 522 Add COLUMN_TYPE_MAPPING for timedelta maport 81377 closed 0     0 2023-01-23T16:49:54Z 2023-11-04T00:49:51Z 2023-11-04T00:49:51Z NONE  

Currently trying to create a column with Python type datetime.timedelta results in an error:

```
>>> from sqlite_utils import Database
>>> db = Database("test.db")
>>> test_tbl = db['test']
>>> test_tbl.insert({'col1': datetime.timedelta()})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py", line 2979, in insert
    return self.insert_all(
  File "/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py", line 3082, in insert_all
    self.create(
  File "/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py", line 1574, in create
    self.db.create_table(
  File "/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py", line 961, in create_table
    sql = self.create_table_sql(
  File "/usr/local/lib/python3.10/dist-packages/sqlite_utils/db.py", line 852, in create_table_sql
    column_type=COLUMN_TYPE_MAPPING[column_type],
KeyError: <class 'datetime.timedelta'>
```

The reason this would be useful is that MySQLdb uses timedelta for MySQL TIME columns:

```
>>> import MySQLdb
>>> conn = MySQLdb.connect(host='database', user='user', passwd='pw')
>>> csr = conn.cursor()
>>> csr.execute("SELECT CAST('11:20' AS TIME)")
>>> tuple(csr)
((datetime.timedelta(seconds=40800),),)
```

So currently any attempt to convert a MySQL DB with a TIME column using db-to-sqlite will result in the above error.

I was rather surprised that MySQLdb uses timedelta for TIME columns but I see that this column type is intended for time intervals as well as the time of day so it makes sense.
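
Until the mapping landed (see PR #596 below), one workaround sketch was to convert timedelta values to something sqlite-utils already understands before inserting:

```python
import datetime
from sqlite_utils import Database

db = Database(memory=True)
row = {"col1": datetime.timedelta(seconds=40800)}

# Convert timedelta values to seconds so they fit an existing column type
converted = {
    key: value.total_seconds() if isinstance(value, datetime.timedelta) else value
    for key, value in row.items()
}
db["test"].insert(converted)
print(list(db["test"].rows))  # [{'col1': 40800.0}]
```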

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/522/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1919296686 PR_kwDOCGYnMM5bifPC 596 Fixes mapping for time fields related to mysql, closes #522 nezhar 4420927 closed 0     1 2023-09-29T13:41:48Z 2023-11-04T00:49:50Z 2023-11-04T00:49:50Z CONTRIBUTOR simonw/sqlite-utils/pulls/596

Adds COLUMN_TYPE_MAPPING for TIME fields that are mapped as datetime.timedelta for MySQL, and a JSON representation for datetime.timedelta, in order to fix #522


:books: Documentation preview :books:: https://sqlite-utils--596.org.readthedocs.build/en/596/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/596/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1926729132 PR_kwDOCGYnMM5b7Z_y 598 Fixed issue #433 - CLI eats cursor spookylukey 62745 closed 0     2 2023-10-04T18:06:58Z 2023-11-04T00:46:55Z 2023-11-04T00:40:30Z CONTRIBUTOR simonw/sqlite-utils/pulls/598

The issue is that underlying iterator is not fully consumed within the body of the with file_progress() block. Instead, that block creates generator expressions like docs = (dict(zip(headers, row)) for row in reader)

These iterables are consumed later, outside the with file_progress() block, which consumes the underlying iterator, and in turn updates the progress bar.

This means that the ProgressBar.__exit__ method gets called before the last time the ProgressBar.update method gets called. The result is that the code to make the cursor invisible (inside the update() method) is called after the cleanup code to make it visible (in the __exit__ method).

The fix is to move consumption of the docs iterators within the with file_progress() block.

(An additional fix, to make ProgressBar more robust against this kind of misuse, would be to make it refuse to update after its __exit__ method has been called, just like files cannot be read() after they are closed. That requires a change in the click library.)

Note that Github diff obscures the simplicity of this diff, it's just indenting a block of code.
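
A minimal reconstruction of the pattern with click.progressbar (the real code builds dict generators from CSV readers; this uses a toy list):

```python
import click

rows = range(1000)

# Bug pattern: the generator is created inside the block but consumed
# after it exits, so the last update() runs after __exit__ has cleaned up
with click.progressbar(rows) as bar:
    docs = (row * 2 for row in bar)
results = list(docs)  # too late: the progress bar has already exited

# The fix in this PR: consume the iterator while the block is still open
with click.progressbar(rows) as bar:
    docs = (row * 2 for row in bar)
    results = list(docs)
```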


:books: Documentation preview :books:: https://sqlite-utils--598.org.readthedocs.build/en/598/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/598/reactions",
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 1,
    "eyes": 0
}
0  
1239034903 I_kwDOCGYnMM5J2iwX 433 CLI eats my cursor chapmanjacobd 7908073 closed 0     10 2022-05-17T18:52:52Z 2023-11-04T00:46:30Z 2023-11-04T00:46:30Z CONTRIBUTOR  

I'm not sure why this happens, but sqlite-utils makes my terminal cursor disappear after running commands like sqlite-utils insert. I've only noticed this behavior in sqlite-utils, not in any other CLI tools.

I can still type commands after it runs, but the text cursor is invisible.

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/433/reactions",
    "total_count": 5,
    "+1": 5,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1977004379 PR_kwDOCGYnMM5elFZf 600 Add spatialite arm64 linux path MikeCoats 37802088 closed 0     5 2023-11-03T22:23:26Z 2023-11-04T00:34:33Z 2023-11-04T00:31:49Z CONTRIBUTOR simonw/sqlite-utils/pulls/600

According to both Debian and Ubuntu, the correct “target triple” for arm64 is aarch64-linux-gnu, so we should be looking in /usr/lib/aarch64-linux-gnu for mod_spatialite.so.

I can confirm that on both of my Debian arm64 SBCs, libsqlite3-mod-spatialite installs to that path.

```
$ ls -l /usr/lib/*/*spatial*
lrwxrwxrwx 1 root root      23 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so -> mod_spatialite.so.7.1.0
lrwxrwxrwx 1 root root      23 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so.7 -> mod_spatialite.so.7.1.0
-rw-r--r-- 1 root root 7348584 Dec  1  2022 /usr/lib/aarch64-linux-gnu/mod_spatialite.so.7.1.0
```

This is a set of before and after snippets of pytest’s output for this PR.

Before

```
$ pytest
tests/test_get.py ......                 [ 73%]
tests/test_gis.py ssssssssssss           [ 75%]
tests/test_hypothesis.py ....            [ 75%]
```

After

```
$ pytest
tests/test_get.py ......                 [ 73%]
tests/test_gis.py ............           [ 75%]
tests/test_hypothesis.py ....            [ 75%]
```

Issue: #599


:books: Documentation preview :books:: https://sqlite-utils--600.org.readthedocs.build/en/600/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/600/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
684961449 MDU6SXNzdWU2ODQ5NjE0NDk= 949 Try out CodeMirror SQL hints simonw 9599 closed 0     5 2020-08-24T20:58:21Z 2023-11-03T05:28:58Z 2020-11-01T03:29:48Z OWNER  

It would also be interesting to try out the SQL hint mode, which can autocomplete against tables and columns. This demo shows how to configure that: https://codemirror.net/mode/sql/

Some missing documentation: https://stackoverflow.com/questions/20023381/codemirror-how-add-tables-to-sql-hint Originally posted by @simonw in https://github.com/simonw/datasette/issues/948#issuecomment-679355426

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/949/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
410384988 MDU6SXNzdWU0MTAzODQ5ODg= 411 How to pass named parameter into spatialite MakePoint() function dazzag24 1055831 closed 0     3 2019-02-14T16:30:22Z 2023-10-25T13:23:04Z 2019-05-05T12:25:04Z NONE  

Hi,

  • datasette version: "0.26.2"
  • spatialite extension: "4.4.0-RC0"
  • sqlite version: "3.22.0"

I have a table of airports with latitude and longitude columns. I've added spatialite (with KNN support). After creating the db using csvs-to-sqlite, I run these commands to set up the spatialite tables:

``` conn.execute('SELECT InitSpatialMetadata(1)')

conn.execute("SELECT AddGeometryColumn('airports', 'point_geom', 4326, 'POINT', 2);")

conn.execute('''UPDATE airports SET point_geom = GeomFromText('POINT('||"longitude"||' '||"latitude"||')',4326);''')

conn.execute("SELECT CreateSpatialIndex('airports', 'point_geom');") ```

I'm attempting to create a canned query and have this in my metadata.json file:

```json
"find_airports_nearest_to_point": {
    "sql": "SELECT a.pos AS rank, b.id, b.name, b.country, b.latitude AS latitude, b.longitude AS longitude, a.distance / 1000.0 AS dist_km FROM KNN AS a JOIN airports AS b ON (b.rowid = a.fid) WHERE f_table_name = \"airports\" AND ref_geometry = MakePoint( :Long , :Lat ) AND max_items = 10;"
}
```

which doesn't seem to perform the templating of the named parameters correctly, and I get no results.

Have also tried: MakePoint( || :Long || , || :Lat || ) which returns this error: near "||": syntax error

However I cannot seem to find the correct combination of named parameter syntax (:Lat) or sqlite concatenation operator to make it work. Any ideas if using named parameters inside functions is supported?

Thanks Darren
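
For what it's worth, named parameters can be bound inside a function call like any other value - a sketch with plain sqlite3 (assumes mod_spatialite is on the library search path):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")  # exact path varies by platform

sql = "select AsText(MakePoint(:Long, :Lat, 4326))"
print(conn.execute(sql, {"Long": -0.1278, "Lat": 51.5074}).fetchone()[0])
# POINT(-0.1278 51.5074)
```

The `||` variant fails because `||` is a binary concatenation operator that needs an operand on both sides, so `MakePoint( || :Long || , ...)` is a syntax error rather than a templating problem.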

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/411/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1949756141 PR_kwDOBm6k_c5dJF8z 2200 Bump the python-packages group with 1 update dependabot[bot] 49699333 closed 0     1 2023-10-18T13:25:55Z 2023-10-24T13:40:29Z 2023-10-24T13:40:26Z CONTRIBUTOR simonw/datasette/pulls/2200

Bumps the python-packages group with 1 update: black.

Release notes

Sourced from black's releases.

23.10.0

Stable style

  • Fix comments getting removed from inside parenthesized strings (#3909)

Preview style

  • Fix long lines with power operators getting split before the line length (#3942)
  • Long type hints are now wrapped in parentheses and properly indented when split across multiple lines (#3899)
  • Magic trailing commas are now respected in return types. (#3916)
  • Require one empty line after module-level docstrings. (#3932)
  • Treat raw triple-quoted strings as docstrings (#3947)

Configuration

  • Fix cache versioning logic when BLACK_CACHE_DIR is set (#3937)

Parser

  • Fix bug where attributes named type were not accepted inside match statements (#3950)
  • Add support for PEP 695 type aliases containing lambdas and other unusual expressions (#3949)

Output

  • Black no longer attempts to provide special errors for attempting to format Python 2 code (#3933)
  • Black will more consistently print stacktraces on internal errors in verbose mode (#3938)

Integrations

  • The action output displayed in the job summary is now wrapped in Markdown (#3914)
Changelog

Sourced from black's changelog.

23.10.0

Stable style

  • Fix comments getting removed from inside parenthesized strings (#3909)

Preview style

  • Fix long lines with power operators getting split before the line length (#3942)
  • Long type hints are now wrapped in parentheses and properly indented when split across multiple lines (#3899)
  • Magic trailing commas are now respected in return types. (#3916)
  • Require one empty line after module-level docstrings. (#3932)
  • Treat raw triple-quoted strings as docstrings (#3947)

Configuration

  • Fix cache versioning logic when BLACK_CACHE_DIR is set (#3937)

Parser

  • Fix bug where attributes named type were not accepted inside match statements (#3950)
  • Add support for PEP 695 type aliases containing lambdas and other unusual expressions (#3949)

Output

  • Black no longer attempts to provide special errors for attempting to format Python 2 code (#3933)
  • Black will more consistently print stacktraces on internal errors in verbose mode (#3938)

Integrations

  • The action output displayed in the job summary is now wrapped in Markdown (#3914)
Commits
  • 9edba85 Prepare release 23.10.0 (#3951)
  • bb58807 Fix parser bug where "type" was misinterpreted as a keyword inside a match (#...
  • 722735d Fix grammar for type alias support (#3949)
  • abe57e3 Treat raw strings like other docstrings (#3947)
  • 1648ac5 Fix long lines with power operator(s) getting splitted before line length (#3...
  • 6f84f65 Migrate mypy config to pyproject.toml (#3936)
  • 3bb9214 CI Test: Deprecating 'Healthcheck.all()' from Hypothesis in fuzz.py (#3945)
  • 935f303 Fix test that was not being run (#3939)
  • b7717c3 Standardise newlines after module-level docstrings (#3932)
  • 7aa37ea Report all stacktraces in verbose mode (#3938)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2200.org.readthedocs.build/en/2200/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2200/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1955676270 I_kwDOBm6k_c50kUBu 2201 Discord invite link is invalid andrewsanchez 11708906 open 0     0 2023-10-21T21:50:05Z 2023-10-21T21:50:05Z   NONE  

https://datasette.io/discord leads to https://discord.com/invite/ktd74dm5mw, which returns an "Invalid Invite" error page (screenshot omitted).

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2201/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1163369515 I_kwDOBm6k_c5FV5wr 1655 query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data fgregg 536941 open 0     8 2022-03-09T00:56:40Z 2023-10-17T21:53:17Z   CONTRIBUTOR  

This page is using about 400 MB in Firefox 97 on macOS. If you download the HTML for the page, it's about 11 MB, and the CSV for the data is about 1 MB.

It's using over 1 GB in Chrome 99.

I found this because I was trying to figure out why editing the SQL was getting very slow.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1655/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1651082214 PR_kwDOBm6k_c5NcFCf 2052 feat: Javascript Plugin API (Custom panels, column menu items with JS actions) hydrosquall 9020979 closed 0 simonw 9599   20 2023-04-02T20:23:44Z 2023-10-14T17:49:03Z 2023-10-13T00:00:27Z CONTRIBUTOR simonw/datasette/pulls/2052

Motivation

  • Allow plugins that add data visualizations (datasette-vega, datasette-leaflet, and datasette-nteract-data-explorer) to co-exist safely
  • Standardize APIs / hooks to ease development for new JS plugin developers (better compat with datasette-lite) through standardized DOM selectors, methods for extending the existing Table UI. This has come up as a feature request several times (see research notes for examples)
  • Discussion w/ @simonw about a general-purpose Datasette JS API

Changes

Summary: Provide 2 new surface areas for Datasette JS plugin developers. See alpha documentation

  1. Custom column header items: https://a.cl.ly/Kou97wJr
  2. Basic "panels" controlled by buttons: https://a.cl.ly/rRugWobd

User Facing Changes

  • Allow creating menu items under table header that triggers JS (instead of opening hrefs per the existing menu_link hook). Items can respond to any column metadata provided by the column header (e.g. label). The proof of concept plugins log data to the console, or copy the column name to clipboard.
  • Allow plugins to register UI elements in a panel controller. The parent component handles switching the visibility of active plugins.
  • Because native button elements are used, the panel is keyboard-accessible - use tab / shift-tab to cycle through tab options, and enter to select.
  • There's room to improve the styling, but the focus of this PR is on the API rather than the UX.

(plugin) Developer Facing Changes

  • Dispatch a datasette_init CustomEvent when the datasetteManager is finished loading.
  • Provide manager.registerPlugin API for adding new functionality that coordinates with Datasette lifecycle events.
  • Provide a manager.selectors map of DOM elements that users may want to hook into.
  • Updated table.js to refer to these selectors, to help keep things in sync
  • Allow plugins to register themselves with 2 hooks:
  • makeColumnActions: Add items to menu in table column headers. Users can provide a label, and either href or onClick with full access to the metadata for the clicked column (name, type, misc. booleans)
  • makeAboveTablePanelConfigs: Add items to the panel. Each panel has a unique ID (namespaced within that plugin), a render function, and a string label.

See this file for example plugin usage.

Core Developer Facing Changes

  • Modified table.js to make use of the datasetteManager API.
  • Added example plugins to the demos/plugins folder, and stored the test js in the statics/ folder

Testing

For Datasette plugin developers, please see the alpha-level documentation.

To run the examples:

```bash
datasette serve fixtures.db --plugins-dir=demos/plugins/
```

Open local server: http://127.0.0.1:8001/fixtures/facetable

Open to all feedback on this PR, from API design to variable naming, to what additional hooks might be useful for the future.

My focus was more on the general shape of the API for developers, rather than on the UX of the test plugins.

Design notes

  • The manager tab panel could be a separate plugin if the implementation is too custom.
  • The makeColumnHeaderItems benefits from hooking into the logic of table.js
  • I wanted to offer this to the Datasette core, since the datasette-manager would be more powerful if it were connected to lifecycle and JS events that are part of the existing table.js.
  • Non-goals:
  • Dependency management (for now) - there's no "build" step, we don't know when new plugins will be added. While there are some valid use cases (for example, allow multiple plugins to wait for a global leaflet object to be loaded), I don't see enough use-cases to justify doing this yet.
  • Enabling single-page-app features - for now, most datasette actions lead to a new page being loaded. SPA development offers some benefits (no page jumping after clicking on a link), but also complexity that doesn't need to be in the core Datasette project.

Research Notes

  • Relocated to a comment, as this isn't required to review when evaluating the plugin. Including it just for those who are curious.
datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2052/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1943259395 I_kwDOEhK-wc5z08kD 16 time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ' linonetwo 3746270 open 0     0 2023-10-14T13:24:39Z 2023-10-14T13:24:39Z   NONE  

```
evernote-to-sqlite enex evernote.db ./我的笔记.enex
Importing from ENEX  [#####-------------------------------]   14%
Traceback (most recent call last):
  File "/usr/local/bin/evernote-to-sqlite", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/cli.py", line 31, in enex
    save_note(db, note)
  File "/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py", line 46, in save_note
    "created": convert_datetime(created),
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/evernote_to_sqlite/utils.py", line 111, in convert_datetime
    return datetime.datetime.strptime(s, "%Y%m%dT%H%M%SZ").isoformat()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py", line 568, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '2014-11-21T11:44:12.000Z' does not match format '%Y%m%dT%H%M%SZ'
```

The .enex file was exported by the Evernote Mac client.
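A minimal sketch of a more forgiving converter, assuming it mirrors the convert_datetime() helper in evernote_to_sqlite/utils.py: try the compact ENEX format first, then fall back to the ISO-8601-with-milliseconds form seen in this traceback.

```python
import datetime

def convert_datetime(s):
    # Hypothetical replacement for evernote_to_sqlite.utils.convert_datetime:
    # accept both the compact ENEX timestamp ("20141121T114412Z") and the
    # ISO-8601 variant with milliseconds that this export contains.
    for fmt in ("%Y%m%dT%H%M%SZ", "%Y-%m-%dT%H:%M:%S.%fZ"):
        try:
            return datetime.datetime.strptime(s, fmt).isoformat()
        except ValueError:
            continue
    raise ValueError(f"time data {s!r} does not match any supported format")
```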

evernote-to-sqlite 303218369 issue    
{
    "url": "https://api.github.com/repos/dogsheep/evernote-to-sqlite/issues/16/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1910269679 I_kwDOBm6k_c5x3Gbv 2196 Discord invite link returns 401 Olshansk 1892194 closed 0     2 2023-09-24T15:16:54Z 2023-10-13T00:07:08Z 2023-10-12T21:54:54Z NONE  

I found the link to the datasette discord channel via this query.

The following video should be self-explanatory:

https://github.com/simonw/datasette/assets/1892194/8cd33e88-bcaa-41f3-9818-ab4d589c3f02

Link for reference: https://discord.com/invite/ktd74dm5mw

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2196/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1940346034 I_kwDOBm6k_c5zp1Sy 2199 Detailed upgrade instructions for metadata.yaml -> datasette.yaml simonw 9599 open 0   Datasette 1.0 3268330 7 2023-10-12T16:21:25Z 2023-10-12T22:08:42Z   OWNER  

Exception: Datasette no longer accepts plugin configuration in --metadata. Move your "plugins" configuration blocks to a separate file - we suggest calling that datasette..json - and start Datasette with datasette -c datasette..json. See https://docs.datasette.io/en/latest/configuration.html for more details.

I think we should link directly to documentation that tells people how to perform this upgrade.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/2190#issuecomment-1759947021

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2199/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1930008379 I_kwDOBm6k_c5zCZc7 2197 click-default-group-wheel dependency conflict ar-jan 1176293 closed 0     3 2023-10-06T11:49:20Z 2023-10-12T21:53:17Z 2023-10-12T21:53:17Z NONE  

I upgraded my dependencies, then ran into this problem running datasette inspect:

```
env/lib/python3.9/site-packages/datasette/cli.py", line 6, in <module>
    from click_default_group import DefaultGroup
ModuleNotFoundError: No module named 'click_default_group'
```

Turns out the released version of datasette still depends on click-default-group-wheel, so click-default-group doesn't get installed/recognized:

```
$ virtualenv venv
$ source venv/bin/activate
$ pip install datasette
$ pip list | grep click-default-group
click-default-group       1.2.4
click-default-group-wheel 1.2.3
$ python -c "from click_default_group import DefaultGroup"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'click_default_group'
$ pip install --force-reinstall click-default-group
...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasette 0.64.4 requires click-default-group-wheel>=1.2.2, which is not installed.
Successfully installed click-8.1.7 click-default-group-1.2.4
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2197/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1901483874 PR_kwDOBm6k_c5amULw 2190 Raise an exception if a "plugins" block exists in metadata.json asg017 15178711 closed 0     5 2023-09-18T18:08:56Z 2023-10-12T16:20:51Z 2023-10-12T16:20:51Z CONTRIBUTOR simonw/datasette/pulls/2190

refs #2183 #2093

From this comment in #2183: If a "plugins" block appears in metadata.json, it means that a user hasn't migrated over their plugin configuration from metadata.json to datasette.yaml, which is a breaking change in Datasette 1.0.

This PR will ensure that an error is raised whenever that happens.


:books: Documentation preview :books:: https://datasette--2190.org.readthedocs.build/en/2190/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2190/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1901768721 PR_kwDOBm6k_c5anSg5 2191 Move `permissions`, `allow` blocks, canned queries and more out of `metadata.yaml` and into `datasette.yaml` asg017 15178711 closed 0     4 2023-09-18T21:21:16Z 2023-10-12T16:16:38Z 2023-10-12T16:16:38Z CONTRIBUTOR simonw/datasette/pulls/2191

The PR moves the following fields from metadata.yaml to datasette.yaml:

`permissions`, `allow`, `allow_sql`, `queries`, `extra_css_urls`, `extra_js_urls`

This is a significant breaking change that users will need to upgrade their metadata.yaml files for. But the format/locations are similar to the previous version, so it shouldn't be too difficult to upgrade.

One note: I'm still working on the Configuration docs, specifically the "reference" section. Though that part is pretty small, the rest is ready to review.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2191/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1931794126 I_kwDOBm6k_c5zJNbO 2198 --load-extension=spatialite not working with Windows hcarter333 363004 open 0     0 2023-10-08T12:50:22Z 2023-10-08T12:50:22Z   NONE  

Using each of

```
python -m datasette counties.db -m metadata.yml --load-extension=SpatiaLite
```

and

```
python -m datasette counties.db --load-extension="C:\Windows\System32\mod_spatialite.dll"
```

and

```
python -m datasette counties.db --load-extension=C:\Windows\System32\mod_spatialite.dll
```

I got the error:

```
  File "C:\Users\m3n7es\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\datasette\database.py", line 209, in in_thread
    self.ds._prepare_connection(conn, self.name)
  File "C:\Users\m3n7es\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\datasette\app.py", line 596, in _prepare_connection
    conn.execute("SELECT load_extension(?, ?)", [path, entrypoint])
sqlite3.OperationalError: The specified module could not be found.
```

I finally tried modifying the code in app.py to read:

```python
def _prepare_connection(self, conn, database):
    conn.row_factory = sqlite3.Row
    conn.text_factory = lambda x: str(x, "utf-8", "replace")
    if self.sqlite_extensions:
        conn.enable_load_extension(True)
        for extension in self.sqlite_extensions:
            # "extension" is either a string path to the extension
            # or a 2-item tuple that specifies which entrypoint to load.
            # if isinstance(extension, tuple):
            #     path, entrypoint = extension
            #     conn.execute("SELECT load_extension(?, ?)", [path, entrypoint])
            # else:
            conn.execute("SELECT load_extension('C:\Windows\System32\mod_spatialite.dll')")
```

At which point the counties example worked.

Is there a correct way to install/use the extension on Windows? My method will cause issues if there's a second extension to be used.
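For reference, a sketch of the two-argument load_extension() form from plain Python; the entrypoint name sqlite3_modspatialite_init is an assumption based on SpatiaLite 5.x, not something confirmed in this thread.

```python
import sqlite3

conn = sqlite3.connect("counties.db")
conn.enable_load_extension(True)
# A raw string keeps the backslashes in the Windows path from being
# interpreted as escape sequences.
conn.execute(
    "SELECT load_extension(?, ?)",
    [r"C:\Windows\System32\mod_spatialite.dll", "sqlite3_modspatialite_init"],  # entrypoint name assumed
)
```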

On an unrelated note, my next step is to figure out how to write a query across the two loaded databases supplied from the command line:

```
python -m datasette rm_toucans_23_10_07.db counties.db -m metadata.yml --load-extension=SpatiaLite
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2198/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1426379903 PR_kwDOBm6k_c5BtJNn 1870 don't use immutable=1, only mode=ro fgregg 536941 open 0     7 2022-10-27T23:33:04Z 2023-10-03T19:12:37Z   CONTRIBUTOR simonw/datasette/pulls/1870

Opening db files in immutable mode sometimes leads to the file being mutated, which causes duplication in the docker image layers: see #1836, #1480

That this happens in "immutable" mode is surprising, because the sqlite docs say that setting this should open the database as read only.

https://www.sqlite.org/c3ref/open.html

immutable: The immutable parameter is a boolean query parameter that indicates that the database file is stored on read-only media. When immutable is set, SQLite assumes that the database file cannot be changed, even by a process with higher privilege, and so the database is opened read-only and all locking and change detection is disabled. Caution: Setting the immutable property on a database file that does in fact change can result in incorrect query results and/or SQLITE_CORRUPT errors. See also: SQLITE_IOCAP_IMMUTABLE.

Perhaps this is a bug in sqlite?


:books: Documentation preview :books:: https://datasette--1870.org.readthedocs.build/en/1870/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1870/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1920416843 I_kwDOCGYnMM5ydzxL 597 sqlite-utils insert-files should be able to convert fields grimnight 1737541 open 0     0 2023-09-30T22:20:47Z 2023-09-30T22:20:47Z   NONE  

Currently, both insert-files and convert are needed in order to create sqlar files; it would be more convenient if it could be done with just one command.

```shell
~ ❯ cat test.py
import os

class Example:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1

~ ❯ sqlite-utils insert-files test.sqlar sqlar test.py -c name:name -c data:content -c mode:mode -c mtime:mtime -c sz:size --pk=name
  [####################################]  100%

~ ❯ sqlite-utils convert test.sqlar sqlar data "zlib.compress(value)" --import=zlib --where "name = 'test.py'"
  [####################################]  100%

~ ❯ cat test.py | sqlite-utils convert test.sqlar sqlar data "zlib.compress(sys.stdin.buffer.read())" --import=zlib --import=sys --where "name = 'test.py'"  # Alternative way
  [####################################]  100%

~ ❯ sqlite3 test.sqlar "SELECT hex(data) FROM sqlar WHERE name = 'test.py';" | python3 -c "import sys, zlib; sys.stdout.buffer.write(zlib.decompress(bytes.fromhex(sys.stdin.read())))"
import os

class Example:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1

~ ❯ rm test.py

~ ❯ sqlar -l test.sqlar
test.py

~ ❯ sqlar -x test.sqlar

~ ❯ cat test.py
import os

class Example:
    def __init__(self, arg1, arg2):
        self.arg1 = arg1
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/597/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
777333388 MDU6SXNzdWU3NzczMzMzODg= 1168 Mechanism for storing metadata in _metadata tables simonw 9599 open 0     21 2021-01-01T18:47:27Z 2023-09-28T18:29:05Z   OWNER  

Original title: Perhaps metadata should all live in a _metadata in-memory database

Inspired by #1150 - metadata should be exposed as an API, and for large Datasette instances that API may need to be paginated. So why not expose it through an in-memory database table?

One catch to this: plugins. #860 aims to add a plugin hook for metadata. But if the metadata comes from an in-memory table, how do the plugins interact with it?

The need to paginate over metadata does make a plugin hook that returns metadata for an individual table seem less wise, since we don't want to have to do 10,000 plugin hook invocations to show a list of all metadata.

If those plugins write directly to the in-memory table how can their contributions survive the server restarting?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1168/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1865572575 PR_kwDOBm6k_c5Yt2eO 2155 Fix hupper.start_reloader entry point cadeef 79087 open 0     2 2023-08-24T17:14:08Z 2023-09-27T18:44:02Z   FIRST_TIME_CONTRIBUTOR simonw/datasette/pulls/2155

Update hupper's entry point so that click commands are processed properly.

Fixes #2123


:books: Documentation preview :books:: https://datasette--2155.org.readthedocs.build/en/2155/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2155/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 2,
    "eyes": 0
}
0  
1907281675 I_kwDOCGYnMM5xrs8L 595 Cascading DELETE not working with Table.delete(pk) cycle-data 123451970 closed 0     1 2023-09-21T15:46:41Z 2023-09-25T09:38:57Z 2023-09-25T09:38:13Z NONE  

Hi! I noticed that when I try to use the delete method of the Table object, the record gets properly deleted from the table, but the cascading delete triggers on foreign keys do not activate.

self.db["contact"].delete(contact_id)

I tried querying the database directly via DB Browser and the triggers work without any issue. I looked up the source code, and behind the scenes this method just queries the database normally, so I'm not exactly sure where this behavior comes from.

Thank you in advance for your time !
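One plausible cause (an assumption, not confirmed in this thread): SQLite only honours ON DELETE CASCADE when foreign key enforcement is enabled on the connection, and it is off by default. A minimal sketch of the workaround:

```python
import sqlite_utils

db = sqlite_utils.Database("contacts.db")
# Foreign key enforcement is per-connection and disabled by default in
# SQLite, so cascading deletes only fire once this pragma is set.
db.execute("PRAGMA foreign_keys = ON")
db["contact"].delete(1)  # dependent rows should now cascade
```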

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/595/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
944846776 MDU6SXNzdWU5NDQ4NDY3NzY= 297 Option for importing CSV data using the SQLite .import mechanism simonw 9599 open 0     23 2021-07-14T22:36:41Z 2023-09-22T20:49:52Z   OWNER  

As seen in https://til.simonwillison.net/sqlite/import-csv - .mode csv and then .import school.csv schools is hugely faster than importing via sqlite-utils insert and doing the work in Python - but it can only be implemented by shelling out to the sqlite3 CLI tool; it's not functionality that is exposed by the Python sqlite3 module.

An option to use this would be useful - maybe something like this:

```
sqlite-utils insert blah.db blah blah.csv --fast
```
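A rough sketch of what the shell-out could look like internally, assuming the sqlite3 CLI is on the PATH (the fast_csv_import name and the --fast flag itself are both hypothetical here):

```python
import subprocess

def fast_csv_import(db_path, table, csv_path):
    # Drive the sqlite3 CLI with dot-commands; its .import is much faster
    # than inserting rows one at a time from Python.
    subprocess.run(
        ["sqlite3", db_path],
        input=f".mode csv\n.import {csv_path} {table}\n",
        text=True,
        check=True,
    )

fast_csv_import("blah.db", "blah", "blah.csv")
```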
sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/297/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1907765514 I_kwDOBm6k_c5xtjEK 2195 `datasette publish` needs support for the new config/metadata split simonw 9599 open 0     9 2023-09-21T21:08:12Z 2023-09-21T22:57:48Z   OWNER  

... which raises the challenge that datasette publish doesn't yet know what to do with a config file!

Originally posted by @simonw in https://github.com/simonw/datasette/issues/2194#issuecomment-1730259871

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2195/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1901416155 I_kwDOBm6k_c5xVU7b 2189 Server hang on parallel execution of queries to named in-memory databases simonw 9599 closed 0     31 2023-09-18T17:23:18Z 2023-09-21T22:26:21Z 2023-09-21T22:26:21Z OWNER  

I've started to encounter a bug where queries to tables inside named in-memory databases sometimes trigger server hangs.

I'm still trying to figure out what's going on here - on one occasion I managed to Ctrl+C the server and saw an exception that mentioned a thread lock, but usually hitting Ctrl+C does nothing and I have to kill -9 the PID instead.

This is all running on my M2 Mac.

I've seen the bug in the Datasette 1.0 alphas and in Datasette 0.64.3 - but reverting to 0.61 appeared to fix it.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2189/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1662951875 I_kwDOBm6k_c5jHqHD 2057 DeprecationWarning: pkg_resources is deprecated as an API simonw 9599 closed 0     25 2023-04-11T17:41:20Z 2023-09-21T22:09:10Z 2023-09-21T22:09:10Z OWNER  

Got this running tests against Python 3.11.

```
../../../.local/share/virtualenvs/datasette-big-local-6Yn-280V/lib/python3.11/site-packages/datasette/app.py:14: in <module>
    import pkg_resources
../../../.local/share/virtualenvs/datasette-big-local-6Yn-280V/lib/python3.11/site-packages/pkg_resources/__init__.py:121: in <module>
    warnings.warn("pkg_resources is deprecated as an API", DeprecationWarning)
E   DeprecationWarning: pkg_resources is deprecated as an API
```

I ran with pytest -Werror --pdb -x to get the debugger for that warning, but it turned out searching the code worked better. It's used in these two places:

https://github.com/simonw/datasette/blob/5890a20c374fb0812d88c9b0ef26a838bfa06c76/datasette/plugins.py#L43-L50

https://github.com/simonw/datasette/blob/5890a20c374fb0812d88c9b0ef26a838bfa06c76/datasette/app.py#L1037

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2057/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1907695234 I_kwDOBm6k_c5xtR6C 2194 Deploy failing with "plugins/alternative_route.py: Not a directory" simonw 9599 closed 0     8 2023-09-21T20:17:49Z 2023-09-21T22:08:19Z 2023-09-21T22:08:19Z OWNER  

https://github.com/simonw/datasette/actions/runs/6266449018/job/17017460074

This is a bit of a mystery; I don't think I've changed anything recently that could have broken this.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2194/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1907655261 I_kwDOBm6k_c5xtIJd 2193 "Test DATASETTE_LOAD_PLUGINS" test shows errors but did not fail the CI run simonw 9599 closed 0     6 2023-09-21T19:49:34Z 2023-09-21T21:56:43Z 2023-09-21T21:56:43Z OWNER  

That passed on 3.8 but should have failed: https://github.com/simonw/datasette/actions/runs/6266341481/job/17017099801 - the "Test DATASETTE_LOAD_PLUGINS" test shows errors but did not fail the CI run.

Originally posted by @simonw in https://github.com/simonw/datasette/issues/2057#issuecomment-1730201226

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2193/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1896578249 PR_kwDOBm6k_c5aWACP 2185 Bump the python-packages group with 3 updates dependabot[bot] 49699333 closed 0     1 2023-09-14T13:27:40Z 2023-09-20T22:11:25Z 2023-09-20T22:11:24Z CONTRIBUTOR simonw/datasette/pulls/2185

Bumps the python-packages group with 3 updates: sphinx, furo and black.

Updates sphinx from 7.2.5 to 7.2.6

Release notes

Sourced from sphinx's releases.

Sphinx 7.2.6

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Changelog

Sourced from sphinx's changelog.

Release 7.2.6 (released Sep 13, 2023)

Bugs fixed

  • #11679: Add the SPHINX_AUTODOC_RELOAD_MODULES environment variable, which if set reloads modules when using autodoc with TYPE_CHECKING = True. Patch by Matt Wozniski and Adam Turner.
  • #11679: Use importlib.reload to reload modules in autodoc. Patch by Matt Wozniski and Adam Turner.
Commits
  • cf7d275 Bump to 7.2.6 final
  • 43d6975 Leverage importlib.reload for reloading modules (#11679)
  • 13da5d7 Inline makecmd in make mode
  • 3d0110a Enable test_cython on Python 3.12
  • 22759fb Bump version
  • See full diff in compare view


Updates furo from 2023.8.19 to 2023.9.10

Changelog

Sourced from furo's changelog.

Changelog

2023.09.10 -- Zesty Zaffre

  • Make asset hash injection idempotent, fixing Sphinx 6 compatibility.
  • Fix the check for HTML builders, fixing non-HTML Read the Docs builds.

2023.08.19 -- Xenolithic Xanadu

  • Fix missing search context with Sphinx 7.2, for dirhtml builds.
  • Drop support for Python 3.7.
  • Present configuration errors in a better format -- thanks @​AA-Turner!
  • Bump require_sphinx() to Sphinx 6.0, in line with dependency changes in Unassuming Ultramarine.

2023.08.17 -- Wonderous White

  • Fix compatibility with Sphinx 7.2.0 and 7.2.1.

2023.07.26 -- Vigilant Volt

  • Fix compatibility with Sphinx 7.1.
  • Improve how content overflow is handled.
  • Improve how literal blocks containing inline code are handled.

2023.05.20 -- Unassuming Ultramarine

  • ✨ Add support for Sphinx 7.
  • Drop support for Sphinx 5.
  • Improve the screen-reader label for sidebar collapse.
  • Make it easier to create derived themes from Furo.
  • Bump all JS dependencies (NodeJS and npm packages).

2023.03.27 -- Tasty Tangerine

  • Regenerate with newer version of sphinx-theme-builder, to fix RECORD hashes.
  • Add missing class to Font Awesome examples

2023.03.23 -- Sassy Saffron

... (truncated)

Commits
  • 2718ca4 Prepare release: 2023.09.10
  • c22c99d Update changelog
  • c37e849 Quote a not-runtime-generic type annotation
  • 9cfdf44 Rework infrastructure for linting
  • 5abeb9f Fix the check for HTML builders
  • ee2ab54 Tweak how tests are run with nox
  • cdae236 Test against Sphinx minor versions in CI
  • 9e40071 Make asset hash injection idempotent
  • aab86f4 Revert "Exclude incompatible Sphinx releases (#711)"
  • 4dd6eec Exclude incompatible Sphinx releases (#711)
  • Additional commits viewable in compare view


Updates black from 23.7.0 to 23.9.1

Release notes

Sourced from black's releases.

23.9.1

Due to various issues, the previous release (23.9.0) did not include compiled mypyc wheels, which make Black significantly faster. These issues have now been fixed, and this release should come with compiled wheels once again.

There will be no wheels for Python 3.12 due to a bug in mypyc. We will provide 3.12 wheels in a future release as soon as the mypyc bug is fixed.

Packaging

  • Upgrade to mypy 1.5.1 (#3864)

Performance

  • Store raw tuples instead of NamedTuples in Black's cache, improving performance and decreasing the size of the cache (#3877)

23.9.0

Preview style

  • More concise formatting for dummy implementations (#3796)
  • In stub files, add a blank line between a statement with a body (e.g. an if sys.version_info > (3, x):) and a function definition on the same level (#3862)
  • Fix a bug whereby spaces were removed from walrus operators within subscript (#3823)

Configuration

  • Black now applies exclusion and ignore logic before resolving symlinks (#3846)

Performance

  • Avoid importing IPython if notebook cells do not contain magics (#3782)
  • Improve caching by comparing file hashes as fallback for mtime and size (#3821)

Blackd

  • Fix an issue in blackd with single character input (#3558)

Integrations

  • Black now has an official pre-commit mirror. Swapping https://github.com/psf/black to https://github.com/psf/black-pre-commit-mirror in your .pre-commit-config.yaml will make Black about 2x faster (#3828)
  • The .black.env folder specified by ENV_PATH will now be removed on the completion of the GitHub Action (#3759)
Changelog

Sourced from black's changelog.

23.9.1

Due to various issues, the previous release (23.9.0) did not include compiled mypyc wheels, which make Black significantly faster. These issues have now been fixed, and this release should come with compiled wheels once again.

There will be no wheels for Python 3.12 due to a bug in mypyc. We will provide 3.12 wheels in a future release as soon as the mypyc bug is fixed.

Packaging

  • Upgrade to mypy 1.5.1 (#3864)

Performance

  • Store raw tuples instead of NamedTuples in Black's cache, improving performance and decreasing the size of the cache (#3877)

23.9.0

Preview style

  • More concise formatting for dummy implementations (#3796)
  • In stub files, add a blank line between a statement with a body (e.g. an if sys.version_info > (3, x):) and a function definition on the same level (#3862)
  • Fix a bug whereby spaces were removed from walrus operators within subscript (#3823)

Configuration

  • Black now applies exclusion and ignore logic before resolving symlinks (#3846)

Performance

  • Avoid importing IPython if notebook cells do not contain magics (#3782)
  • Improve caching by comparing file hashes as fallback for mtime and size (#3821)

Blackd

  • Fix an issue in blackd with single character input (#3558)

Integrations

  • Black now has an official pre-commit mirror. Swapping https://github.com/psf/black to https://github.com/psf/black-pre-commit-mirror in your .pre-commit-config.yaml will make Black about 2x faster (#3828)
  • The .black.env folder specified by ENV_PATH will now be removed on the completion of the GitHub Action (#3759)
Commits
  • e877371 Prepare release 23.9.1 (#3878)
  • 62dca32 mypyc builds on PRs, skip mypyc wheels for 3.12 (#3870)
  • 751583a Pickle raw tuples in FileData cache (#3877)
  • f791745 Re-export black.Mode (#3875)
  • 0b62b9c Ignore aiohttp DeprecationWarning for 3.12 (#3876)
  • c83ad6c Upgrade to Furo 2023.9.10 to fix docs build (#3873)
  • 4eebfd1 Add mypyc test marks to new tests that patch (#3871)
  • add161b Bump RTD Python version from 3.8 to 3.11 (#3868)
  • 4e93f2a Add classifier for 3.12 (#3866)
  • 716fa08 Upgrade mypy (#3864)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2185.org.readthedocs.build/en/2185/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2185/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1903932086 PR_kwDOBm6k_c5aumyn 2192 Stop using parallel SQL queries for tables simonw 9599 closed 0     1 2023-09-20T01:28:43Z 2023-09-20T22:10:56Z 2023-09-20T22:10:55Z OWNER simonw/datasette/pulls/2192

Refs: #2189


:books: Documentation preview :books:: https://datasette--2192.org.readthedocs.build/en/2192/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2192/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1825007061 I_kwDOBm6k_c5sx2XV 2123 datasette serve when invoked with --reload interprets the serve command as a file cadeef 79087 open 0     2 2023-07-27T19:07:22Z 2023-09-18T13:02:46Z   NONE  

When running datasette serve with the --reload flag, the serve command is picked up as a file argument:

```
$ datasette serve --reload test_db
Starting monitor for PID 13574.
Error: Invalid value for '[FILES]...': Path 'serve' does not exist.
Press ENTER or change a file to reload.
```

If a 'serve' file is created it launches properly (albeit with an empty database called serve):

```
$ touch serve; datasette serve --reload test_db
Starting monitor for PID 13628.
INFO:     Started server process [13628]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
```

Version (running from HEAD on main):

```
$ datasette --version
datasette, version 1.0a2
```

This issue appears to have existed for a while, as https://github.com/simonw/datasette/issues/1380#issuecomment-953366110 mentions the error in a different context.

I'm happy to debug and land a patch if it's welcome.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2123/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1900026059 I_kwDOBm6k_c5xQBjL 2188 Plugin Hooks for "compile to SQL" languages asg017 15178711 open 0     2 2023-09-18T01:37:15Z 2023-09-18T06:58:53Z   CONTRIBUTOR  

There's a ton of tools/languages that compile to SQL, which may be nice in Datasette. Some examples:

  • Logica https://logica.dev
  • PRQL https://prql-lang.org
  • Malloy, but not sure if it works with SQLite? https://github.com/malloydata/malloy

It would be cool if plugins could extend Datasette to use these languages, in both the code editor and API usage.

A few things I'd imagine a datasette-prql or datasette-logica plugin would do:

  • prql= instead of sql=
  • Code editor support (syntax highlighting, autocomplete)
  • Hide/show SQL
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2188/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
787098345 MDU6SXNzdWU3ODcwOTgzNDU= 1191 Ability for plugins to collaborate when adding extra HTML to blocks in default templates simonw 9599 open 0   Datasette 1.0 3268330 12 2021-01-15T18:18:51Z 2023-09-18T06:55:52Z   OWNER  

Sometimes a plugin may want to add content to an existing default template - for example datasette-search-all adds a new search box at the top of index.html. I also want datasette-upload-csvs to add a CTA on the database.html page: https://github.com/simonw/datasette-upload-csvs/issues/18

Currently plugins can do this by providing a new version of the index.html template - but if multiple plugins try to do that only one of them will succeed.

It would be better if there were known areas of those templates which plugins could add additional content to, such that multiple plugins can use the same spot.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1191/reactions",
    "total_count": 4,
    "+1": 4,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1899310542 I_kwDOBm6k_c5xNS3O 2187 Datasette for serving JSON only geofinder 19705106 open 0     0 2023-09-16T05:48:29Z 2023-09-16T05:48:29Z   NONE  

Hi, is there any way to use Datasette to serve JSON only, without displaying the web page? I've tried searching the documentation for this but didn't find any information.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2187/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1898927976 I_kwDOBm6k_c5xL1do 2186 Mechanism for register_output_renderer hooks to access full count simonw 9599 open 0   Datasette 1.0 3268330 2 2023-09-15T18:57:54Z 2023-09-15T19:27:59Z   OWNER  

The cause of this bug: - https://github.com/simonw/datasette-export-notebook/issues/17

Is that datasette-export-notebook was consulting data["filtered_table_rows_count"] in the render output plugin function in order to show the total number of rows that would be exported.

That field is no longer available by default - the "count" field is only available if ?_extra=count was passed.

It would be useful if plugins like this could access the total count on demand, should they need to.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2186/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1890593563 PR_kwDOBm6k_c5aBx3g 2182 Bump the python-packages group with 2 updates dependabot[bot] 49699333 closed 0     1 2023-09-11T14:01:25Z 2023-09-14T13:27:30Z 2023-09-14T13:27:28Z CONTRIBUTOR simonw/datasette/pulls/2182

Bumps the python-packages group with 2 updates: furo and black.

Updates furo from 2023.8.19 to 2023.9.10

Changelog

Sourced from furo's changelog.

Changelog

2023.09.10 -- Zesty Zaffre

  • Make asset hash injection idempotent, fixing Sphinx 6 compatibility.
  • Fix the check for HTML builders, fixing non-HTML Read the Docs builds.

2023.08.19 -- Xenolithic Xanadu

  • Fix missing search context with Sphinx 7.2, for dirhtml builds.
  • Drop support for Python 3.7.
  • Present configuration errors in a better format -- thanks @​AA-Turner!
  • Bump require_sphinx() to Sphinx 6.0, in line with dependency changes in Unassuming Ultramarine.

2023.08.17 -- Wonderous White

  • Fix compatibility with Sphinx 7.2.0 and 7.2.1.

2023.07.26 -- Vigilant Volt

  • Fix compatibility with Sphinx 7.1.
  • Improve how content overflow is handled.
  • Improve how literal blocks containing inline code are handled.

2023.05.20 -- Unassuming Ultramarine

  • ✨ Add support for Sphinx 7.
  • Drop support for Sphinx 5.
  • Improve the screen-reader label for sidebar collapse.
  • Make it easier to create derived themes from Furo.
  • Bump all JS dependencies (NodeJS and npm packages).

2023.03.27 -- Tasty Tangerine

  • Regenerate with newer version of sphinx-theme-builder, to fix RECORD hashes.
  • Add missing class to Font Awesome examples

2023.03.23 -- Sassy Saffron

... (truncated)

Commits
  • 2718ca4 Prepare release: 2023.09.10
  • c22c99d Update changelog
  • c37e849 Quote a not-runtime-generic type annotation
  • 9cfdf44 Rework infrastructure for linting
  • 5abeb9f Fix the check for HTML builders
  • ee2ab54 Tweak how tests are run with nox
  • cdae236 Test against Sphinx minor versions in CI
  • 9e40071 Make asset hash injection idempotent
  • aab86f4 Revert "Exclude incompatible Sphinx releases (#711)"
  • 4dd6eec Exclude incompatible Sphinx releases (#711)
  • Additional commits viewable in compare view


Updates black from 23.7.0 to 23.9.1

Release notes

Sourced from black's releases.

23.9.1

Due to various issues, the previous release (23.9.0) did not include compiled mypyc wheels, which make Black significantly faster. These issues have now been fixed, and this release should come with compiled wheels once again.

There will be no wheels for Python 3.12 due to a bug in mypyc. We will provide 3.12 wheels in a future release as soon as the mypyc bug is fixed.

Packaging

  • Upgrade to mypy 1.5.1 (#3864)

Performance

  • Store raw tuples instead of NamedTuples in Black's cache, improving performance and decreasing the size of the cache (#3877)

23.9.0

Preview style

  • More concise formatting for dummy implementations (#3796)
  • In stub files, add a blank line between a statement with a body (e.g. an if sys.version_info > (3, x):) and a function definition on the same level (#3862)
  • Fix a bug whereby spaces were removed from walrus operators within subscript (#3823)

Configuration

  • Black now applies exclusion and ignore logic before resolving symlinks (#3846)

Performance

  • Avoid importing IPython if notebook cells do not contain magics (#3782)
  • Improve caching by comparing file hashes as fallback for mtime and size (#3821)

Blackd

  • Fix an issue in blackd with single character input (#3558)

Integrations

  • Black now has an official pre-commit mirror. Swapping https://github.com/psf/black to https://github.com/psf/black-pre-commit-mirror in your .pre-commit-config.yaml will make Black about 2x faster (#3828)
  • The .black.env folder specified by ENV_PATH will now be removed on the completion of the GitHub Action (#3759)
Changelog

Sourced from black's changelog.

23.9.1

Due to various issues, the previous release (23.9.0) did not include compiled mypyc wheels, which make Black significantly faster. These issues have now been fixed, and this release should come with compiled wheels once again.

There will be no wheels for Python 3.12 due to a bug in mypyc. We will provide 3.12 wheels in a future release as soon as the mypyc bug is fixed.

Packaging

  • Upgrade to mypy 1.5.1 (#3864)

Performance

  • Store raw tuples instead of NamedTuples in Black's cache, improving performance and decreasing the size of the cache (#3877)

23.9.0

Preview style

  • More concise formatting for dummy implementations (#3796)
  • In stub files, add a blank line between a statement with a body (e.g. an if sys.version_info > (3, x):) and a function definition on the same level (#3862)
  • Fix a bug whereby spaces were removed from walrus operators within subscript (#3823)

Configuration

  • Black now applies exclusion and ignore logic before resolving symlinks (#3846)

Performance

  • Avoid importing IPython if notebook cells do not contain magics (#3782)
  • Improve caching by comparing file hashes as fallback for mtime and size (#3821)

Blackd

  • Fix an issue in blackd with single character input (#3558)

Integrations

  • Black now has an official pre-commit mirror. Swapping https://github.com/psf/black to https://github.com/psf/black-pre-commit-mirror in your .pre-commit-config.yaml will make Black about 2x faster (#3828)
  • The .black.env folder specified by ENV_PATH will now be removed on the completion of the GitHub Action (#3759)
Commits
  • e877371 Prepare release 23.9.1 (#3878)
  • 62dca32 mypyc builds on PRs, skip mypyc wheels for 3.12 (#3870)
  • 751583a Pickle raw tuples in FileData cache (#3877)
  • f791745 Re-export black.Mode (#3875)
  • 0b62b9c Ignore aiohttp DeprecationWarning for 3.12 (#3876)
  • c83ad6c Upgrade to Furo 2023.9.10 to fix docs build (#3873)
  • 4eebfd1 Add mypyc test marks to new tests that patch (#3871)
  • add161b Bump RTD Python version from 3.8 to 3.11 (#3868)
  • 4e93f2a Add classifier for 3.12 (#3866)
  • 716fa08 Upgrade mypy (#3864)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2182.org.readthedocs.build/en/2182/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2182/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1895266807 I_kwDOBm6k_c5w93n3 2184 Design decision - should configuration be exposed at /-/config ? simonw 9599 open 0     0 2023-09-13T21:07:08Z 2023-09-13T21:07:38Z   OWNER  

This made me think. That {"$env": "ENV_VAR"} hack was introduced back here:

  • https://github.com/simonw/datasette/issues/538

The problem it was solving was that metadata was visible to everyone with access to the instance at /-/metadata but plugins clearly needed a way to set secret settings.

Now that this stuff is moving to config, we have some decisions to make:

  1. Add /-/config to let people see the configuration of their instance, and keep the $env trick for secret settings.
  2. Say all configuration aside from metadata is secret and make $env optional or ditch it entirely.
  3. Allow plugins to announce which of their configuration options are secret so we can automatically redact them from /-/config

I've found /-/metadata extraordinarily useful as a user of Datasette - it really helps me understand exactly what's going on if I run into any problems with a plugin, if I can quickly check what the settings look like.

So I'm leaning towards option 1 or 3.

Originally posted by @simonw in https://github.com/simonw/datasette/pull/2183#discussion_r1325076924

Also refs: - #2093

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2184/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1891212159 PR_kwDOBm6k_c5aD33C 2183 `datasette.yaml` plugin support asg017 15178711 closed 0     4 2023-09-11T20:26:04Z 2023-09-13T21:06:25Z 2023-09-13T21:06:25Z CONTRIBUTOR simonw/datasette/pulls/2183

Part of #2093

In #2149 , we ported over "settings.json" into the new datasette.yaml config file, with a top-level "settings" key. This PR ports over plugin configuration into top-level "plugins" key, as well as nested database/table plugin config.

From now on, no plugin-related configuration is allowed in metadata.yaml; it must live in datasette.yaml in this new format. This is a pretty significant breaking change. Thankfully, you should be able to copy-paste your legacy plugin key/values into the new datasette.yaml format.

An example of what datasette.yaml would look like with this new plugin config:

```yaml
plugins:
  datasette-my-plugin:
    config_key: value

databases:
  fixtures:
    plugins:
      datasette-my-plugin:
        config_key: fixtures-db-value
    tables:
      students:
        plugins:
          datasette-my-plugin:
            config_key: fixtures-students-table-value
```

As an additional benefit, this now works with the new -s flag:

```bash
datasette --memory -s 'plugins.datasette-my-plugin.config_key' new_value
```

Marked as a "Draft" right now until I add better documentation. We also should have a plan for the next alpha release to document and publicize this change, especially for plugin authors (since their docs will have to change to say datasette.yaml instead of metadata.yaml).


:books: Documentation preview :books:: https://datasette--2183.org.readthedocs.build/en/2183/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2183/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1891614971 I_kwDOCGYnMM5wv8D7 594 Represent compound foreign keys in table.foreign_keys output simonw 9599 open 0     2 2023-09-12T03:48:24Z 2023-09-12T03:51:13Z   OWNER  

Given this schema:

```sql
CREATE TABLE departments (
    campus_name TEXT NOT NULL,
    dept_code TEXT NOT NULL,
    dept_name TEXT,
    PRIMARY KEY (campus_name, dept_code)
);
CREATE TABLE courses (
    course_code TEXT PRIMARY KEY,
    course_name TEXT,
    campus_name TEXT NOT NULL,
    dept_code TEXT NOT NULL,
    FOREIGN KEY (campus_name, dept_code) REFERENCES departments(campus_name, dept_code)
);
```

The output of db["courses"].foreign_keys right now is:

```
[ForeignKey(table='courses', column='campus_name', other_table='departments', other_column='campus_name'),
 ForeignKey(table='courses', column='dept_code', other_table='departments', other_column='dept_code')]
```

Which suggests two normal foreign keys, not one compound foreign key.
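A sketch of how the compound key could be detected instead, assuming grouping on the id column that PRAGMA foreign_key_list returns (rows sharing an id are columns of the same multi-column foreign key):

```python
import itertools
import sqlite3

conn = sqlite3.connect("courses.db")
# Each row is (id, seq, table, from, to, on_update, on_delete, match);
# rows with the same id belong to one foreign key.
rows = conn.execute("PRAGMA foreign_key_list(courses)").fetchall()
for fk_id, group in itertools.groupby(rows, key=lambda r: r[0]):
    columns = [(r[3], r[4]) for r in group]  # (from, to) column pairs
    print(fk_id, columns)  # one line per foreign key, compound or not
```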

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/594/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1781530343 I_kwDOBm6k_c5qL_7n 2093 Proposal: Combine settings, metadata, static, etc. into a single `datasette.yaml` File asg017 15178711 open 0     8 2023-06-29T21:18:23Z 2023-09-11T20:19:32Z   CONTRIBUTOR  

Very often I get tripped up when trying to configure my Datasette instances. For example: if I want to change the port my app listens on, do I do that with a CLI flag, a --setting flag, inside metadata.json, or an env var? If I want to raise the time limit for SQL statements, is that under metadata.json or a setting? Where does my plugin configuration go?

Normally I need to look it up in the Datasette docs, and I quickly find my answer, but the number of places where "config" goes is overwhelming.

  • Flat CLI flags like --port, --host, --cors, etc.
  • --setting, like default_page_size, sql_time_limit_ms etc
  • Inside metadata.json, including plugin configuration

Typically my Datasette deploys are extremely long shell commands, with multiple --setting and other CLI flags.

Proposal: Consolidate all "config" into datasette.toml

I propose that we add a new datasette.toml that combines "settings", "metadata", and other common CLI flags like --port and --cors into a single file. It would be similar to "Cargo.toml" in Rust projects, "package.json" in Node projects, and "pyproject.toml" in Python, etc.

A sample of what it could look like:

```toml
# "top level" configuration that is currently CLI flags on datasette serve
[config]
port = 8020
host = "0.0.0.0"
cors = true

# replaces multiple --setting flags
[settings]
base_url = "/app/datasette/"
default_allow_sql = true
sql_time_limit_ms = 3500

# replaces metadata.json. The contents of datasette-metadata.json could be
# defined in this file instead, but supporting separate files is nice
# (since those are easy to machine-generate)
[metadata]
include = "./datasette-metadata.json"

# plugin-specific
[plugins]

[plugins.datasette-auth-github]
client_id = {env = "DATASETTE_AUTH_GITHUB_CLIENT_ID"}
client_secret = {env = "GITHUB_CLIENT_SECRET"}

[plugins.datasette-cluster-map]
latitude_column = "lat"
longitude_column = "lon"
```

Pros

  • Instead of multiple files and CLI flags, everything could be in one tidy file
  • Editing config in a separate file is easier than editing CLI flags, since you don't have to kill a process + edit a command every time
  • New users will know to "just edit my datasette.toml" instead of needing to learn metadata + settings + CLI flags
  • Better dev experience for multiple environment. For example, could have datasette -c datasette-dev.toml for local dev environments (enables SQL, debug plugins, long timeouts, etc.), and a datasette -c datasette-prod.toml for "production" (lower timeouts, less plugins, monitoring plugins, etc.)

Cons

  • Yet another config-management system. Now Datasette users will need to know about metadata, settings, CLI flags, and datasette.toml. However with enough documentation + announcements + examples, I think we can get ahead of it.
  • If toml is chosen, would need to add a toml parser for Python version <3.11
  • Multiple sources of config require priority. For example: Would --setting default_allow_sql off override the value inside [settings]? What about --port?

Other Notes

Toml

I chose toml over json because toml supports comments. I chose toml over yaml because Python 3.11 has builtin support for it. I also find toml easier to work with since it doesn't have the odd "gotchas" that YAML has (e.g. 3.10 resolving to 3.1, Norway's country code NO resolving to false, etc.). It also mimics pyproject.toml, which is nice. Happy to change my mind about this, however.
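As a point of reference, the standard-library parser reads a file like the sample above in a couple of lines (tomllib is built into Python 3.11+; the third-party tomli package offers the same API on older versions):

```python
import tomllib  # built into Python 3.11+; "tomli" provides the same API earlier

with open("datasette.toml", "rb") as f:  # tomllib requires binary mode
    config = tomllib.load(f)

print(config["config"]["port"])        # 8020
print(config["settings"]["base_url"])  # "/app/datasette/"
```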

Plugin config will be difficult

Plugin config is currently in metadata.json in two places:

  1. Top level, under "plugins.[plugin-name]". This fits well into datasette.toml as [plugins.plugin-name]
  2. Table level, under "databases.[db-name].tables.[table-name].plugins.[plugin-name]". This doesn't fit that well into datasette.toml, unless it's nested under [metadata]?

Extensions, static, one-off plugins?

We could also include equivalents of --plugins-dir, --static, and --load-extension in datasette.toml, but I'd imagine there are a few security concerns there to think through.

Explicitly list which plugins to use?

I believe Datasette by default will load all installed plugins on startup, but maybe datasette.toml could specify a list of plugins to use? For example, a dev version of datasette.toml could specify datasette-pretty-traces, but the prod version could leave it out.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2093/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1876353656 I_kwDOBm6k_c5v1uJ4 2168 Consider a request/response wrapping hook slightly higher level than asgi_wrapper() simonw 9599 open 0     6 2023-08-31T21:42:04Z 2023-09-10T17:54:08Z   OWNER  

There's a long justification for why this might be needed here: - https://github.com/simonw/datasette-auth-tokens/issues/10#issuecomment-1701820001

Short version: it would be neat if it was possible to stash some data on the request object such that a later plugin/middleware-type-thing could use that to influence the final returned response - similar to the kinds of things you can do with Django middleware.

The asgi_wrapper() mechanism doesn't have access to the request or response objects - it gets scope and can mess around with receive and send, but those are pretty low-level primitives.

Since Datasette has well-defined request and response objects now it might be nice to have a middleware layer that can manipulate those directly.
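To make the gap concrete, here is a sketch of what an asgi_wrapper() plugin has to work with today: raw ASGI primitives rather than Datasette's request/response objects (the proposed higher-level hook in this issue does not exist yet):

```python
from datasette import hookimpl

@hookimpl
def asgi_wrapper(datasette):
    def wrap(app):
        async def wrapped(scope, receive, send):
            # scope is a plain dict, and send emits low-level messages such as
            # "http.response.start"; there is no Request or Response object
            # here to stash data on, or to rewrite before it goes out
            await app(scope, receive, send)
        return wrapped
    return wrap
```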

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2168/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1886771493 I_kwDOCGYnMM5wddkl 592 `table.transform()` should preserve `rowid` values simonw 9599 closed 0     6 2023-09-08T00:42:38Z 2023-09-10T17:46:41Z 2023-09-09T00:45:32Z OWNER  

I just spotted a bug when using https://datasette.io/plugins/datasette-configure-fts and https://datasette.io/plugins/datasette-edit-schema at the same time.

Steps to reproduce:

  • Configure FTS for a table, then run a test search
  • Edit the schema for that table and change the order of columns
  • Run the test search again

I got the wrong search results, which I think is because the _fts table referenced the original table by rowid, but those rowid values were entirely rewritten as a consequence of running table.transform() on the table.

Reconfiguring FTS on the table fixed the problem.

I think table.transform() should be able to preserve rowid values.
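The underlying technique is available in plain SQL: selecting rowid explicitly while copying rows into the replacement table keeps the old values, so external rowid references (like an _fts content table) stay valid. A minimal sketch, independent of the actual sqlite-utils implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE old_table (name TEXT);
    INSERT INTO old_table (rowid, name) VALUES (5, 'kept');
    CREATE TABLE new_table (name TEXT);
    -- copying rowid explicitly preserves it, so rowid references stay valid
    INSERT INTO new_table (rowid, name) SELECT rowid, name FROM old_table;
    """
)
assert conn.execute("SELECT rowid FROM new_table").fetchone() == (5,)
```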

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/592/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1886783150 PR_kwDOCGYnMM5Z1H1d 593 .transform() now preserves rowid values, refs #592 simonw 9599 closed 0     1 2023-09-08T01:02:28Z 2023-09-10T17:44:59Z 2023-09-09T00:45:30Z OWNER simonw/sqlite-utils/pulls/593

Refs: - #592

  • [x] Tests against weird shaped tables

I need to test that this works against:

  • rowid tables
  • Tables that have a column called rowid even though they are not rowid tables

:books: Documentation preview :books:: https://sqlite-utils--593.org.readthedocs.build/en/593/

sqlite-utils 140912432 pull    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/593/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1886791100 I_kwDOBm6k_c5wdiW8 2180 Plugin hook: `actors_from_ids()` simonw 9599 closed 0     6 2023-09-08T01:16:41Z 2023-09-10T17:44:14Z 2023-09-08T04:28:03Z OWNER  

In building Datasette Cloud we realized that a bunch of the features we are building need a way of resolving an actor ID to the actual actor, in order to display something more interesting than just an integer ID.

Social plugins in particular need this - comments by X, CSV uploaded by X, that kind of thing.

I think the solution is a new plugin hook: actors_from_ids(datasette, ids) which can return a list of actor dictionaries.

The default implementation can return [{"id": "..."}] for the IDs passed to it.

Pluggy has a firstresult=True option which is relevant here, since this is the first plugin hook we will have implemented where only one plugin should provide an answer.
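A sketch of what a plugin implementing the proposed hook could look like, following the signature described above (the directory dict is purely illustrative):

```python
from datasette import hookimpl

DIRECTORY = {
    "1": {"id": "1", "name": "Alice"},
    "2": {"id": "2", "name": "Bob"},
}

@hookimpl
def actors_from_ids(datasette, ids):
    # unknown IDs fall back to the bare {"id": ...} shape, matching the
    # default implementation described above
    return [DIRECTORY.get(str(actor_id), {"id": actor_id}) for actor_id in ids]
```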

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2180/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1888477283 I_kwDOC8SPRc5wj-Bj 38 Run `rebuild_fts` after building the index simonw 9599 open 0     0 2023-09-08T23:17:45Z 2023-09-08T23:17:45Z   MEMBER  

In: - https://github.com/simonw/datasette.io/issues/152#issuecomment-1712323347

This turned out to be the fix:

```bash
dogsheep-beta index dogsheep-index.db templates/dogsheep-beta.yml
sqlite-utils rebuild-fts dogsheep-index.db
```

dogsheep-beta 197431109 issue    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-beta/issues/38/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1874255116 I_kwDOBm6k_c5vtt0M 2164 Ability to only load a specific list of plugins simonw 9599 closed 0     1 2023-08-30T19:33:41Z 2023-09-08T04:35:46Z 2023-08-30T22:12:27Z OWNER  

I'm going to try and get this working through an environment variable, so that you can start Datasette and it will only load a subset of plugins including those that use the register_commands() hook.

Initial research on this: - https://github.com/pytest-dev/pluggy/issues/422
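A sketch of the environment-variable gating this implies, applied before entry-point plugins are registered (only the variable name comes from this issue; the helper names are hypothetical):

```python
import os

def allowed_plugins():
    # Unset: load everything (None means "no filter").
    # Set to "": load zero plugins.
    # Otherwise: a comma-separated allow-list of plugin names.
    value = os.environ.get("DATASETTE_LOAD_PLUGINS")
    if value is None:
        return None
    return {name.strip() for name in value.split(",") if name.strip()}

def should_load(distribution_name, allow_list):
    return allow_list is None or distribution_name in allow_list
```

So DATASETTE_LOAD_PLUGINS="" yields an empty set (zero plugins), while leaving the variable unset yields None (no filtering at all).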

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2164/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1886812002 PR_kwDOBm6k_c5Z1N2L 2181 actors_from_ids plugin hook and datasette.actors_from_ids() method simonw 9599 closed 0     3 2023-09-08T01:51:07Z 2023-09-08T04:24:00Z 2023-09-08T04:23:59Z OWNER simonw/datasette/pulls/2181

Refs: - #2180

This plugin hook is feature complete - including documentation and tests.

I'm not going to land it in Datasette main until we've used it at least once though, which should happen promptly in development for Datasette Cloud.


:books: Documentation preview :books:: https://datasette--2181.org.readthedocs.build/en/2181/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2181/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
954546309 MDExOlB1bGxSZXF1ZXN0Njk4NDIzNjY3 8 Add Gmail takeout mbox import (v2) maxhawkins 28565 open 0     7 2021-07-28T07:05:32Z 2023-09-08T01:22:49Z   FIRST_TIME_CONTRIBUTOR dogsheep/google-takeout-to-sqlite/pulls/8

WIP

This PR builds on #5 to continue implementing gmail import support.

Building on @UtahDave's work, these commits add a few performance and bug fixes:

  • Decreased memory overhead for import by manually parsing mbox headers.
  • Fixed error where some messages in the mbox would yield a row with NULL in all columns.

I will send more commits to fix any errors I encounter as I run the importer on my personal takeout data.
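For context, the memory win of header-only parsing comes from never accumulating message bodies. A minimal sketch of the idea, assuming the classic "From " separator convention (the field list is illustrative, not this PR's actual code):

```python
def iter_mbox_headers(path, wanted=("From", "To", "Subject", "Date", "Message-ID")):
    # yields one dict of selected headers per message, without ever
    # holding a full message body in memory
    headers = {}
    in_headers = False
    with open(path, "rb") as f:
        for raw in f:
            if raw.startswith(b"From "):  # mbox message separator
                if headers:
                    yield headers
                headers, in_headers = {}, True
            elif in_headers:
                if raw in (b"\n", b"\r\n"):
                    in_headers = False  # blank line ends the header block
                else:
                    line = raw.decode("utf-8", "replace").rstrip()
                    if ":" in line and not line.startswith((" ", "\t")):
                        key, _, value = line.partition(":")
                        if key in wanted:
                            headers[key] = value.strip()
    if headers:
        yield headers
```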

google-takeout-to-sqlite 206649770 pull    
{
    "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/8/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1884330740 PR_kwDOBm6k_c5ZszDF 2174 Use $DATASETTE_INTERNAL in absence of --internal asg017 15178711 open 0     3 2023-09-06T16:07:15Z 2023-09-08T00:46:13Z   CONTRIBUTOR simonw/datasette/pulls/2174

Refs #2157, specifically this comment

Passing in --internal my_internal.db over and over again can get repetitive.

This PR adds a new configurable env variable DATASETTE_INTERNAL_DB_PATH. If it's defined, then it is used as the path to the internal database. Users can still override this behavior by passing in their own --internal internal.db flag.

In draft mode for now, needs tests and documentation.

Side note: Maybe we could have a section in the docs that lists all the "configuration environment variables" that Datasette respects? I did a quick grep and found:

  • DATASETTE_LOAD_PLUGINS
  • DATASETTE_SECRETS

:books: Documentation preview :books:: https://datasette--2174.org.readthedocs.build/en/2174/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2174/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1886350562 I_kwDOBm6k_c5wb2zi 2178 Don't show foreign key links to tables the user cannot access simonw 9599 closed 0     5 2023-09-07T17:56:41Z 2023-09-07T23:28:27Z 2023-09-07T23:28:27Z OWNER  

Spotted this problem while working on this plugin: - https://github.com/simonw/datasette-public

It's possible to make a table public to any user - but then you may end up with situations like this:

That table is public, but the foreign key links go to tables that are NOT public.

We're also leaking the names of the values in those private tables here, which we shouldn't do. So this is a tiny bit of an information leak.

Since this only affects people who have configured a public table with foreign keys to a private table, I don't think this is worth issuing a vulnerability report about - I very much doubt anyone is running Datasette configured in a way that could result in problems because of this.
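The fix implied here is a per-link permission check before rendering each foreign key link, using Datasette's documented datasette.permission_allowed() internal (the helper wrapping it is hypothetical):

```python
# hypothetical helper inside the table-rendering code path
async def fk_link_allowed(datasette, actor, database, other_table):
    # only render a foreign key link if the actor could view the target table
    return await datasette.permission_allowed(
        actor, "view-table", resource=(database, other_table)
    )
```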

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2178/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1886649402 I_kwDOBm6k_c5wc_w6 2179 Flaky test: test_hidden_sqlite_stat1_table simonw 9599 closed 0     0 2023-09-07T22:48:43Z 2023-09-07T22:51:19Z 2023-09-07T22:51:19Z OWNER  

This test here: https://github.com/simonw/datasette/blob/fbcb103c0cb6668018ace539a01a6a1f156e8d6a/tests/test_api.py#L1011-L1020

It failed for me like this:

```
E       AssertionError: assert [('normal', False), ('sqlite_stat1', True), ('sqlite_stat4', True)] in ([('normal', False), ('sqlite_stat1', True)],)
```

Looks like some builds of SQLite include a sqlite_stat4 table.
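One way to make the assertion robust against those builds is to check the expected rows individually rather than comparing whole lists (a sketch, not necessarily the fix that landed):

```python
def assert_stat_tables_hidden(table_rows):
    # table_rows is a list of (name, hidden) tuples, as in the failing assertion
    assert ("normal", False) in table_rows
    stat_rows = [row for row in table_rows if row[0].startswith("sqlite_stat")]
    assert stat_rows, "expected at least one sqlite_stat* table"
    assert all(hidden for _name, hidden in stat_rows)
```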

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2179/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1010112818 I_kwDOBm6k_c48NRky 1479 Win32 "used by another process" error with datasette publish kirajano 76450761 open 0     7 2021-09-28T19:12:00Z 2023-09-07T02:14:16Z   NONE  

Unfortunately I was not successful in deploying to fly.io. Please see the details below of the three scenarios I tried. I am also new to Datasette.

Failed to deploy. Attaching logs:

1. Tried with an app created via `flyctl apps create frosty-fog-8565`, then ran `datasette publish fly covid.db --app frosty-fog-8565`:

```
Deploying frosty-fog-8565
==> Validating app configuration
--> Validating app configuration done
Services
TCP 80/443 ⇢ 8080

Error error connecting to docker: An unknown error occured.

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly\__init__.py", line 156, in fly
    "--remote-only",
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpgcm8cz66\frosty-fog-8565'
```

2. Tried also with an app that gets autogenerated when running `flyctl launch` (this also generates the .toml file). Then ran `datasette publish fly covid.db --app dark-feather-168`, but got a different error this time:

```
Deploying dark-feather-168
==> Validating app configuration

Error not possible to validate configuration: server returned Post "https://api.fly.io/graphql": unexpected EOF

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette_publish_fly\__init__.py", line 156, in fly
    "--remote-only",
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpnoyewcre\dark-feather-168'
```

These are also the contents of the generated .toml file in the second scenario:

```toml
# fly.toml file generated for dark-feather-168 on 2021-09-28T20:35:44+02:00

app = "dark-feather-168"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[env]

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 6
    timeout = "2s"
```

3. Also tried `datasette package covid.db` to create a local Dockerfile, to later push it via `flyctl deploy`, which fails as well:

```
[+] Building 147.3s (11/11) FINISHED
 => [internal] load build definition from Dockerfile  0.2s
 => => transferring dockerfile: 396B  0.0s
 => [internal] load .dockerignore  0.1s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for docker.io/library/python:3.8  4.7s
 => [auth] library/python:pull token for registry-1.docker.io  0.0s
 => [internal] load build context  0.1s
 => => transferring context: 82.37kB  0.0s
 => [1/5] FROM docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3  108.3s
 => => resolve docker.io/library/python:3.8@sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5  0.0s
 => => sha256:56182bcdf4d4283aa1f46944b4ef7ac881e28b4d5526720a4e9ba03a4730846a 2.22kB / 2.22kB  0.0s
 => => sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366 54.93MB / 54.93MB  21.6s
 => => sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64 10.87MB / 10.87MB  15.5s
 => => sha256:530de807b46a11734e2587a784573c12c5034f2f14025f838589e6c0e3b5c5b6 1.86kB / 1.86kB  0.0s
 => => sha256:ff08f08727e50193dcf499afc30594c47e70cc96f6fcfd1a01240524624264d0 8.65kB / 8.65kB  0.0s
 => => sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8 5.15MB / 5.15MB  6.4s
 => => sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587 54.57MB / 54.57MB  37.7s
 => => sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077 196.45MB / 196.45MB  82.3s
 => => sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58 6.29MB / 6.29MB  26.6s
 => => extracting sha256:955615a668ce169f8a1443fc6b6e6215f43fe0babfb4790712a2d3171f34d366  5.4s
 => => sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26 16.81MB / 16.81MB  40.5s
 => => extracting sha256:2756ef5f69a5190f4308619e0f446d95f5515eef4a814dbad0bcebbbbc7b25a8  0.6s
 => => extracting sha256:911ea9f2bd51e53a455297e0631e18a72a86d7e2c8e1807176e80f991bde5d64  0.6s
 => => sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b 232B / 232B  40.1s
 => => extracting sha256:27b0a22ee906271a6ce9ddd1754fdd7d3b59078e0b57b6cc054c7ed7ac301587  5.8s
 => => sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543 2.35MB / 2.35MB  41.8s
 => => extracting sha256:8584d51a9262f9a3a436dea09ba40fa50f85802018f9bd299eee1bf538481077  18.8s
 => => extracting sha256:524774b7d3638702fe9ae0ea3fcfb81b027dfd75cc2fc14f0119e764b9543d58  1.2s
 => => extracting sha256:9460f6b75036e38367e2f27bb15e85777c5d6cd52ad168741c9566186415aa26  2.9s
 => => extracting sha256:9bc548096c181514aa1253966a330134d939496027f92f57ab376cd236eb280b  0.0s
 => => extracting sha256:1d87379b86b89fd3b8bb1621128f00c8f962756e6aaaed264ec38db733273543  0.8s
 => [2/5] COPY . /app  2.3s
 => [3/5] WORKDIR /app  0.2s
 => [4/5] RUN pip install -U datasette  26.9s
 => [5/5] RUN datasette inspect covid.db --inspect-file inspect-data.json  3.1s
 => exporting to image  1.2s
 => => exporting layers  1.2s
 => => writing image sha256:b5db0c205cd3454c21fbb00ecf6043f261540bcf91c2dfc36d418f1a23a75d7a  0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Traceback (most recent call last):
  File "c:\users\grott\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\grott\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\grott\Anaconda3\Scripts\datasette.exe\__main__.py", line 7, in <module>
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\grott\anaconda3\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\cli.py", line 283, in package
    call(args)
  File "c:\users\grott\anaconda3\lib\contextlib.py", line 119, in __exit__
    next(self.gen)
  File "c:\users\grott\anaconda3\lib\site-packages\datasette\utils\__init__.py", line 451, in temporary_docker_directory
    tmp.cleanup()
  File "c:\users\grott\anaconda3\lib\tempfile.py", line 811, in cleanup
    _shutil.rmtree(self.name)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 516, in rmtree
    return _rmtree_unsafe(path, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 395, in _rmtree_unsafe
    _rmtree_unsafe(fullname, onerror)
  File "c:\users\grott\anaconda3\lib\shutil.py", line 404, in _rmtree_unsafe
    onerror(os.rmdir, path, sys.exc_info())
  File "c:\users\grott\anaconda3\lib\shutil.py", line 402, in _rmtree_unsafe
    os.rmdir(path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\grott\AppData\Local\Temp\tmpkb27qid3\datasette'
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1479/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1884499674 PR_kwDODFE5qs5ZtYMc 13 use poetry for packages, asdf for versioning, and gh actions for ci iloveitaly 150855 open 0     0 2023-09-06T17:59:16Z 2023-09-06T17:59:16Z   FIRST_TIME_CONTRIBUTOR dogsheep/google-takeout-to-sqlite/pulls/13
  • build: use poetry for package management, asdf for python version
  • build: cleanup poetry config, add keywords, ignore dist
  • ci: migrate circleci to gh actions
  • fix: dup method definition
google-takeout-to-sqlite 206649770 pull    
{
    "url": "https://api.github.com/repos/dogsheep/google-takeout-to-sqlite/issues/13/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1884408624 I_kwDOBm6k_c5wUcsw 2177 Move schema tables from _internal to _catalog simonw 9599 open 0     1 2023-09-06T16:58:33Z 2023-09-06T17:04:30Z   OWNER  

This came up in discussion over: - https://github.com/simonw/datasette/pull/2174

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2177/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1875519316 PR_kwDOBm6k_c5ZPO5y 2166 Bump the python-packages group with 1 update dependabot[bot] 49699333 closed 0     1 2023-08-31T13:19:57Z 2023-09-06T16:34:32Z 2023-09-06T16:34:31Z CONTRIBUTOR simonw/datasette/pulls/2166

Bumps the python-packages group with 1 update: sphinx.

Release notes

Sourced from sphinx's releases.

Sphinx 7.2.5

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Changelog

Sourced from sphinx's changelog.

Release 7.2.5 (released Aug 30, 2023)

Bugs fixed

  • #11645: Fix a regression preventing autodoc from importing modules within packages that make use of if typing.TYPE_CHECKING: to guard circular imports needed by type checkers. Patch by Matt Wozniski.
  • #11634: Fixed inheritance diagram relative link resolution for sibling files in a subdirectory. Patch by Albert Shih.
  • #11659: Allow ?config=... in mathjax_path.
  • #11654: autodoc: Fail with a more descriptive error message when an object claims to be an instance of type, but is not a class. Patch by James Braza.
  • #11620: Cease emitting source-read events for files read via the include directive.
  • #11620: Add a new include-read event for observing and transforming the content of included files via the include directive.
  • #11627: Restore support for copyright lines of the form YYYY when SOURCE_DATE_EPOCH is set.
Commits
  • fcc3899 Bump to 7.2.5 final
  • 2a631f9 Restore support for YYYY copyright lines
  • 2730cc3 Remove double spaces in CHANGES
  • ff18318 Add an 'include-read' event (#11657)
  • 74329d9 Fail better in ExceptionDocumenter.can_document_member (#11660)
  • 7d046a8 Allow ?config=... in mathjax_path (#11659)
  • 4692208 Fix two relative link bugs in inheritance diagrams (#11634)
  • ca0fc7a Add git .mailmap file
  • 8248be3 autodoc: Reset sys.modules on partial import failure (#11645)
  • e494baa Recommend correct replacement names for deprecated APIs (#11655)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2166.org.readthedocs.build/en/2166/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2166/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1884333600 PR_kwDOBm6k_c5Zszqk 2175 Test against Python 3.12 preview simonw 9599 closed 0     0 2023-09-06T16:09:05Z 2023-09-06T16:16:28Z 2023-09-06T16:16:27Z OWNER simonw/datasette/pulls/2175

https://dev.to/hugovk/help-test-python-312-beta-1508/


:books: Documentation preview :books:: https://datasette--2175.org.readthedocs.build/en/2175/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2175/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1883055640 PR_kwDOBm6k_c5ZociX 2173 click-default-group>=1.2.3 simonw 9599 closed 0     3 2023-09-06T02:33:28Z 2023-09-06T02:50:10Z 2023-09-06T02:50:10Z OWNER simonw/datasette/pulls/2173

Now available as a wheel: - https://github.com/click-contrib/click-default-group/issues/21


:books: Documentation preview :books:: https://datasette--2173.org.readthedocs.build/en/2173/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2173/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
336464733 MDU6SXNzdWUzMzY0NjQ3MzM= 328 Installation instructions, including how to use the docker image simonw 9599 closed 0     4 2018-06-28T03:59:33Z 2023-09-05T14:10:39Z 2018-06-28T04:02:10Z OWNER  
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/328/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1880968405 PR_kwDOJHON9s5ZhYny 14 fix: fix the problem of Chinese character garbling barretlee 2698003 open 0     0 2023-09-04T23:48:28Z 2023-09-04T23:48:28Z   FIRST_TIME_CONTRIBUTOR dogsheep/apple-notes-to-sqlite/pulls/14
  1. The code uses two different spellings of the encoding name, mac_roman and macroman. It is uncertain whether one of them is a typo.
  2. When there are Chinese characters in the content, exporting results in garbled text. Changing the encoding to utf8 fixes the issue.
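A quick illustration of the decoding change this PR describes (the byte string stands in for Apple Notes output):

```python
raw_bytes = "你好 hello".encode("utf8")  # stand-in for bytes read from Apple Notes

# decoding as Mac Roman garbles the Chinese characters:
garbled = raw_bytes.decode("mac_roman")
assert garbled != "你好 hello"

# decoding as UTF-8, as this PR proposes, round-trips correctly:
assert raw_bytes.decode("utf8") == "你好 hello"
```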
apple-notes-to-sqlite 611552758 pull    
{
    "url": "https://api.github.com/repos/dogsheep/apple-notes-to-sqlite/issues/14/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1879214365 I_kwDOCGYnMM5wAokd 590 Ability to tell if a Database is an in-memory one simonw 9599 open 0     1 2023-09-03T19:50:15Z 2023-09-03T19:50:36Z   OWNER  

Currently the constructor accepts memory=True or memory_name=... and uses those to create a connection, but does not record what those values were:

https://github.com/simonw/sqlite-utils/blob/1260bdc7bfe31c36c272572c6389125f8de6ef71/sqlite_utils/db.py#L307-L349

This makes it hard to tell if a database object is to an in-memory or a file-based database, which is sometimes useful to know.
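A minimal sketch of the fix, recording the constructor arguments so they can be inspected later (attribute names here are assumptions, not the shipped sqlite-utils API):

```python
class Database:
    def __init__(self, filename_or_conn=None, memory=False, memory_name=None):
        # remember what we were given, so callers can ask later
        self.memory_name = memory_name
        self.memory = bool(memory or memory_name)
        ...  # existing connection-creation logic would be unchanged

    @property
    def is_memory(self):
        return self.memory
```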

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/590/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1879209560 I_kwDOCGYnMM5wAnZY 589 Mechanism for de-registering registered SQL functions simonw 9599 open 0     3 2023-09-03T19:32:39Z 2023-09-03T19:36:34Z   OWNER  

I used a custom SQL function in a migration script and then realized that it should be de-registered before the end of the script to avoid leaking into the calling code.
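One possible shape for this is a context manager that registers the function on entry and clears it on exit. Recent Python documents passing None to sqlite3's create_function() to remove a function (SQLite's C API drops a function registered with a NULL implementation), but treat this sketch as an assumption rather than sqlite-utils API:

```python
from contextlib import contextmanager

@contextmanager
def registered_function(conn, name, num_args, fn):
    # conn is a sqlite3.Connection; register fn only for the duration of the block
    conn.create_function(name, num_args, fn)
    try:
        yield
    finally:
        # assumption: passing None clears the registration (NULL at the C level)
        conn.create_function(name, num_args, None)
```

A migration script could then wrap its custom SQL in `with registered_function(db.conn, "slugify", 1, slugify): ...` and avoid leaking the function into calling code.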

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/589/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1292370469 I_kwDOBm6k_c5NCAIl 1765 Document plugins providing new plugin hook- simonw 9599 closed 0     1 2022-07-03T17:05:14Z 2023-08-31T23:08:24Z 2023-08-31T23:06:31Z OWNER  

I've used this pattern twice now: https://til.simonwillison.net/datasette/register-new-plugin-hooks - in datasette-graphql and datasette-low-disk-space-hook. I should describe the pattern on https://docs.datasette.io/en/stable/writing_plugins.html

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1765/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1876407598 I_kwDOBm6k_c5v17Uu 2169 execute-sql on a database should imply view-database/view-permission simonw 9599 closed 0     0 2023-08-31T22:45:56Z 2023-08-31T22:46:28Z 2023-08-31T22:46:28Z OWNER  

I noticed that a token with execute-sql permission alone did not work, because it was not allowed to view the instance or the database.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2169/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1875739055 I_kwDOBm6k_c5vzYGv 2167 Document return type of await ds.permission_allowed() simonw 9599 open 0     0 2023-08-31T15:14:23Z 2023-08-31T15:14:23Z   OWNER  

The return type isn't documented here: https://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/docs/internals.rst#L327-L350

On inspecting the code I'm not 100% sure if it's possible for this method to return None, or if it can only return True or False. Need to confirm that.

https://github.com/simonw/datasette/blob/4c3ef033110407f3b3dbce501659d523724985e0/datasette/app.py#L822C15-L853

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2167/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1871935751 I_kwDOD079W85vk3kH 40 ImportError: cannot import name 'formatargspec' from 'inspect' hosslikw 36752421 closed 0     0 2023-08-29T15:36:31Z 2023-08-31T03:18:07Z 2023-08-31T03:18:06Z NONE  

I get the following error when running `pip3 install dogsheep-photos`:

```
from inspect import ismethod, isclass, formatargspec
ImportError: cannot import name 'formatargspec' from 'inspect' (/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/inspect.py). Did you mean: 'formatargvalues'?
```

Python 3.12.0rc1, sqlite 3.43.0, datasette version 0.64.3

dogsheep-photos 256834907 issue    
{
    "url": "https://api.github.com/repos/dogsheep/dogsheep-photos/issues/40/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1865869205 I_kwDOBm6k_c5vNueV 2157 Proposal: Make the `_internal` database persistent, customizable, and hidden asg017 15178711 open 0     3 2023-08-24T20:54:29Z 2023-08-31T02:45:56Z   CONTRIBUTOR  

The current _internal database is used by Datasette core to cache info about databases/tables/columns/foreign keys of databases in a Datasette instance. It's a temporary database created at startup, that can only be seen by the root user. See an example _internal DB here, after logging in as root.

The current _internal database has a few rough edges:

  • It's part of datasette.databases, so many plugins have to specifically exclude _internal from their queries examples here
  • It's only used by Datasette core and can't be used by plugins or 3rd parties
  • It's created from scratch at startup and stored in memory. Which is fine (the performance is great), but persistent storage would be nice.

Additionally, it would be really nice if plugins could use this _internal database to store their own configuration, secrets, and settings. For example:

  • datasette-auth-tokens creates a _datasette_auth_tokens table to store auth token metadata. This could be moved into the _internal database to avoid writing to the user's database
  • datasette-socrata creates a socrata_imports table, which also can be in _internal
  • datasette-upload-csvs creates a _csv_progress_ table, which can be in _internal
  • datasette-write-ui wants to have the ability for users to toggle whether a table appears editable, which can be either in datasette.yaml or on-the-fly by storing config in _internal

In general, these are specific features that Datasette plugins would have access to if there was a central internal database they could read/write to:

  • Dynamic configuration. Changing the datasette.yaml file works, but restarting the server every time is tedious. Plugins could define their own configuration table in _internal, and read/write to it to store configuration based on user actions (cell menu click, API access, etc.)
  • Caching. If a plugin or Datasette Core needs to cache some expensive computation, they can store it inside _internal (possibly as a temporary table) instead of managing their own caching solution.
  • Audit logs. If a plugin performs some sensitive operations, they can log usage info to _internal for others to audit later.
  • Long running process status. Many plugins (datasette-upload-csvs, datasette-litestream, datasette-socrata) perform tasks that run for a really long time, and want to give continuous status updates to the user. They can store this info inside _internal
  • Safer authentication. Passwords and authentication plugins usually store credentials/hashed secrets in configuration files or environment variables, which can be difficult to handle. Now, they can store them in _internal

Proposal

  • We remove _internal from datasette.databases property.
  • We add a new datasette.get_internal_db() method that returns the _internal database, for plugins to use (see the sketch after this list)
  • We add a new --internal internal.db flag. If provided, then the _internal DB will be sourced from that file, and further updates will be persisted to that file (instead of an in-memory database)
  • When creating internal.db, create a new _datasette_internal table to mark it as a "Datasette internal database"
  • In datasette serve, we check for the existence of the _datasette_internal table. If it exists, we assume the user provided that file in error and raise an error. This is to limit the chance that someone accidentally publishes their internal database to the internet. We could optionally add a --unsafe-allow-internal flag (or database plugin) that allows someone to do this if they really want to.
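A sketch of how a plugin might use the proposed datasette.get_internal_db() method from the list above (the method is part of this proposal, not an existing API; the table is hypothetical):

```python
from datasette import hookimpl

@hookimpl
def startup(datasette):
    async def inner():
        internal_db = datasette.get_internal_db()  # proposed API, not yet real
        await internal_db.execute_write(
            "CREATE TABLE IF NOT EXISTS my_plugin_settings "
            "(key TEXT PRIMARY KEY, value TEXT)"
        )
    return inner()
```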

New features unlocked with this

These features don't really need a standardized _internal table per se (plugins could already build their own long-term storage features if they really wanted to), but it would make it much simpler to create these kinds of features with a persistent application database.

  • datasette-comments : A plugin for commenting on rows or specific values in a database. Comment contents + threads + email notification info can be stored in _internal
  • Bookmarks: "Bookmarking" an SQL query could be stored in _internal, or a URL link shortener
  • Webhooks: If a plugin wants to either consume a webhook or create a new one, they can store hashed credentials/API endpoints in _internal
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2157/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
742041667 MDU6SXNzdWU3NDIwNDE2Njc= 1092 Make cascading permission checks available to plugins simonw 9599 closed 0     1 2020-11-13T01:02:55Z 2023-08-30T22:17:42Z 2023-08-30T22:17:41Z OWNER  

The BaseView class has a method for cascading permission checks, but it's not easily accessible to plugins.

https://github.com/simonw/datasette/blob/5eb8e9bf250b26e30b017d39a392c33973997656/datasette/views/base.py#L75-L99

This leaves plugins like datasette-graphql having to implement their own versions of this logic, which is bad: https://github.com/simonw/datasette-graphql/issues/65

First check view-database - if that says False then disallow access, if it says True then allow access. If it says None check view-instance.

This should become a supported API that plugins are encouraged to use.
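A sketch of that cascade as a standalone helper (the three-valued check callable stands in for the internal permission logic described above):

```python
async def cascading_view_allowed(check, database):
    # check(action, resource) is assumed to return True, False, or None
    # (None meaning "no opinion", so the cascade continues)
    decision = await check("view-database", database)
    if decision is not None:
        return decision
    return bool(await check("view-instance", None))
```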

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1092/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
594237015 MDU6SXNzdWU1OTQyMzcwMTU= 718 Plugin idea: datasette-redirects simonw 9599 open 0     0 2020-04-05T03:41:38Z 2023-08-30T22:17:31Z   OWNER  

I just had to write a one-off custom plugin to redirect niche-musems.com to www.niche-museums.com (https://github.com/simonw/museums/issues/21) - it would be great if this kind of thing could be handled by a configurable plugin.

https://github.com/simonw/museums/blob/6b1faf00c463b2228860d4d62d104b11935e01b1/plugins/redirect_www.py

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/718/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  reopened
787098146 MDU6SXNzdWU3ODcwOTgxNDY= 1190 `datasette publish upload` mechanism for uploading databases to an existing Datasette instance tomershvueli 1024355 closed 0     3 2021-01-15T18:18:42Z 2023-08-30T22:16:39Z 2023-08-30T22:16:38Z NONE  

If I have a self-hosted instance of Datasette up and running, I'd like to be able to the use the CLI to publish databases to that instance, not only Google or Heroku. Ideally there'd be a url parameter or something similar to which one could point the publish command to their instance.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1190/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1874327336 PR_kwDOBm6k_c5ZLMSe 2165 DATASETTE_LOAD_PLUGINS environment variable for loading specific plugins simonw 9599 closed 0     6 2023-08-30T20:33:30Z 2023-08-30T22:12:25Z 2023-08-30T22:12:25Z OWNER simonw/datasette/pulls/2165
  • 2164

TODO:

  • [x] Automated tests
  • [ ] Documentation
  • [x] Make sure DATASETTE_LOAD_PLUGINS='' works for loading zero plugins
datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2165/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1872043170 I_kwDOBm6k_c5vlRyi 2163 Rename core_X to catalog_X in the internals simonw 9599 closed 0     1 2023-08-29T16:45:00Z 2023-08-29T17:01:31Z 2023-08-29T17:01:31Z OWNER  

Discussed with Alex this morning. We think the American spelling is fine here (it's shorter than catalogue) and that it's a slightly less lazy name than core_.

Follows: - https://github.com/simonw/datasette/issues/2157

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2163/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1805076818 I_kwDOBm6k_c5rl0lS 2102 API tokens with view-table but not view-database/view-instance cannot access the table simonw 9599 closed 0 simonw 9599   20 2023-07-14T15:34:27Z 2023-08-29T16:32:36Z 2023-08-29T16:32:35Z OWNER  

Spotted a problem while working on this: if you grant a token access to view table for a specific table but don't also grant view database and view instance permissions, that token is useless.

This was a deliberate design decision in Datasette - it's documented on https://docs.datasette.io/en/1.0a2/authentication.html#access-permissions-in-metadata

If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries.

I'm now second-guessing if this was a good decision.

Originally posted by @simonw in https://github.com/simonw/datasette-auth-tokens/issues/7#issuecomment-1636031702

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2102/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
1865281760 PR_kwDOBm6k_c5Ys3C5 2154 Cascade for restricted token view-table/view-database/view-instance operations simonw 9599 closed 0     8 2023-08-24T14:24:23Z 2023-08-29T16:32:35Z 2023-08-29T16:32:34Z OWNER simonw/datasette/pulls/2154

Refs: - #2102

Also includes a prototype implementation of --actor option which I'm using for testing this, from: - #2153


:books: Documentation preview :books:: https://datasette--2154.org.readthedocs.build/en/2154/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2154/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1870672704 PR_kwDOBm6k_c5Y-7Em 2162 Add new `--internal internal.db` option, deprecate legacy `_internal` database asg017 15178711 closed 0     4 2023-08-29T00:05:07Z 2023-08-29T03:24:23Z 2023-08-29T03:24:23Z CONTRIBUTOR simonw/datasette/pulls/2162

refs #2157

This PR adds a new --internal option to datasette serve. If provided, it is the path to a persistent internal database that Datasette core and Datasette plugins can use to store data, as discussed in the proposal issue.

This PR also removes and deprecates the previous in-memory _internal database. Those tables now appear in the internal database, with core_ prefixes (e.g. tables in _internal is now core_tables in the internal DB).

A note on the new core_ tables

However, one important note about those new core_ tables: if an --internal DB is passed in, those core_ tables will persist across multiple Datasette instances. This wasn't the case before, since _internal was always an in-memory database created from scratch.

I tried to put those core_ tables as TEMP tables - after all, there's only 1 internal DB connection at a time, so I figured it would work. But, since we use the Database() wrapper for the internal DB, it has two separate connections: a default read-only connection and a write connection that is created when a write operation occurs. Which meant the TEMP tables would be created by the write connection, but not available in the read-only connection.

So I had a brilliant idea: attach an in-memory named database with cache=shared, and create those tables there!

```sql
ATTACH DATABASE 'file:datasette_internal_core?mode=memory&cache=shared' AS core;
```

We'd run this on both the read-only connection and the write-only connection. That way, those tables would stay in memory, they'd communicate with the cache=shared feature, and we'd be good to go.

However, I couldn't find an easy way to run an ATTACH DATABASE command on the read-only connection.

Using Database() as a wrapper for the internal DB is pretty limiting - it's meant for Datasette "data" databases, where we want multiple readers and possibly 1 write connection at a time. But the internal database doesn't really require that kind of support - I think we could get away with a single read/write connection, but it seemed like too big of a rabbithole to go through now.


:books: Documentation preview :books:: https://datasette--2162.org.readthedocs.build/en/2162/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2162/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1869807874 PR_kwDOBm6k_c5Y8AN0 2160 Bump sphinx, furo, blacken-docs dependencies dependabot[bot] 49699333 closed 0     5 2023-08-28T13:49:31Z 2023-08-29T00:38:33Z 2023-08-29T00:38:32Z CONTRIBUTOR simonw/datasette/pulls/2160

Bumps the python-packages group with 3 updates: sphinx, furo and blacken-docs.

Updates sphinx from 7.1.2 to 7.2.4

Release notes

Sourced from sphinx's releases.

Sphinx 7.2.4

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Sphinx 7.2.3

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Sphinx 7.2.2

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Sphinx 7.2.1

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Sphinx 7.2.0

Changelog: https://www.sphinx-doc.org/en/master/changes.html

Changelog

Sourced from sphinx's changelog.

Release 7.2.4 (released Aug 28, 2023)

Bugs fixed

  • #11618: Fix a regression in the MoveModuleTargets transform, introduced in #10478 (#9662).
  • #11649: linkcheck: Resolve hanging tests for timezones west of London and incorrect conversion from UTC to offsets from the UNIX epoch. Patch by Dmitry Shachnev and Adam Turner.

Release 7.2.3 (released Aug 23, 2023)

Dependencies

  • #11576: Require sphinxcontrib-serializinghtml 1.1.9.

Bugs fixed

  • Fix regression in autodoc.Documenter.parse_name().
  • Fix regression in JSON serialisation.
  • #11543: autodoc: Support positional-only parameters in classmethod methods when autodoc_preserve_defaults is True.
  • Restore support for string methods on path objects. This is deprecated and will be removed in Sphinx 8. Use os.fspath to convert Path objects to strings, or Path's methods to work with path objects.

Release 7.2.2 (released Aug 17, 2023)

Bugs fixed

  • Fix the signature of the StateMachine.insert_input() patch, for when calling with keyword arguments.
  • Fixed membership testing (in) for the str interface of the asset classes (_CascadingStyleSheet and _JavaScript), which several extensions relied upon.
  • Fixed a type error in SingleFileHTMLBuilder._get_local_toctree, includehidden may be passed as a string or a boolean.
  • Fix :noindex: for PyModule and JSModule.

Release 7.2.1 (released Aug 17, 2023)

... (truncated)

Commits
  • 3256f1f Bump to 7.2.4 final
  • 2f025a4 linkcheck: Fix conversion from UTC time to the UNIX epoch (#11649)
  • 1567281 autodoc: Fix UnboundLocalError in filter_members (#11651)
  • 5e88b9f Fix the MoveModuleTargets transform (#11647)
  • 694fcee Fix markup in CHANGES (#11639)
  • c503c90 Improve pathlib type annotations (#11646)
  • bf339b1 Bump version
  • 2f6ea14 Bump to 7.2.3 final
  • 511e407 Implement bool() for string paths
  • 494de73 Implement hash() for string paths
  • Additional commits viewable in compare view


Updates furo from 2023.7.26 to 2023.8.19

Changelog

Sourced from furo's changelog.

Changelog

2023.08.19 -- Xenolithic Xanadu

  • Fix missing search context with Sphinx 7.2, for dirhtml builds.
  • Drop support for Python 3.7.
  • Present configuration errors in a better format -- thanks @AA-Turner!
  • Bump require_sphinx() to Sphinx 6.0, in line with dependency changes in Unassuming Ultramarine.

2023.08.17 -- Wonderous White

  • Fix compatibility with Sphinx 7.2.0 and 7.2.1.

2023.07.26 -- Vigilant Volt

  • Fix compatibility with Sphinx 7.1.
  • Improve how content overflow is handled.
  • Improve how literal blocks containing inline code are handled.

2023.05.20 -- Unassuming Ultramarine

  • ✨ Add support for Sphinx 7.
  • Drop support for Sphinx 5.
  • Improve the screen-reader label for sidebar collapse.
  • Make it easier to create derived themes from Furo.
  • Bump all JS dependencies (NodeJS and npm packages).

2023.03.27 -- Tasty Tangerine

  • Regenerate with newer version of sphinx-theme-builder, to fix RECORD hashes.
  • Add missing class to Font Awesome examples

2023.03.23 -- Sassy Saffron

  • Update Python version classifiers.
  • Increase the icon size in mobile header.
  • Increase admonition title bg opacity.
  • Change the default API background to transparent.
  • Transition the API background change.

... (truncated)

Commits
  • 0766bb2 Prepare release: 2023.08.19
  • 807d73c Update changelog
  • 364b261 Accomodate for the required data-content_root for search
  • 0d38bc6 Simplify retrieval of pygments_dark_style value (#699)
  • 3631ffc Use sphinx.errors.ConfigError (#697)
  • d2e2448 Remove duplicate HTML builder check (#698)
  • 7b4f130 Drop Python 3.7 (#701)
  • e322b71 Remove pointless assert (#702)
  • ee2097a Bump require_sphinx() to Sphinx 6.0 (#700)
  • c1ff10b Back to development
  • Additional commits viewable in compare view


Updates blacken-docs from 1.15.0 to 1.16.0

Changelog

Sourced from blacken-docs's changelog.

1.16.0 (2023-08-16)

  • Allow Markdown fence options.

    Thanks to initial work from Matthew Anderson in PR [#246](https://github.com/adamchainz/blacken-docs/pull/246).

  • Expand Markdown detection to all Python language names from Pygments: py, sage, python3, py3, and numpy.

  • Preserve leading whitespace lines in reStructuredText code blocks.

    Thanks to Julianus Pfeuffer for the report in Issue [#217](https://github.com/adamchainz/blacken-docs/issues/217).

  • Use exit code 2 to indicate errors from Black, whilst exit code 1 remains for “files have been formatted”.

    Thanks to Julianus Pfeuffer for the report in Issue [#218](https://github.com/adamchainz/blacken-docs/issues/218).

  • Support passing the --preview option through to Black, to select the future style.

  • Remove language_version from .pre-commit-hooks.yaml. This change allows default_language_version in .pre-commit-config.yaml to take precedence.

    Thanks to Aneesh Agrawal in PR [#258](https://github.com/adamchainz/blacken-docs/pull/258).

Commits
  • 960ead2 Version 1.16.0
  • 8f0ed18 Support passing --preview through to Black (#273)
  • 4eb4e4c Tweak changelog note
  • 6c7450c Use exit code 2 to indicate errors (#272)
  • 99dfc8d Preserve leading whitespace lines in rST (#271)
  • 94465e8 Reformat markdown tests with dedent() (#270)
  • 7cd5f30 Use .md in glob example
  • f97e569 Document applying to many files (#269)
  • ae612b0 Expand Markdown detection to all Python language names (#268)
  • da9b455 Replace NamedTuple with plain class (#267)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions

:books: Documentation preview :books:: https://datasette--2160.org.readthedocs.build/en/2160/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2160/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);