issues

7 rows where repo = 107914493, state = "closed" and user = 15178711 sorted by updated_at descending

id node_id number title user state locked assignee milestone comments created_at updated_at closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
1901483874 PR_kwDOBm6k_c5amULw 2190 Raise an exception if a "plugins" block exists in metadata.json asg017 15178711 closed 0     5 2023-09-18T18:08:56Z 2023-10-12T16:20:51Z 2023-10-12T16:20:51Z CONTRIBUTOR simonw/datasette/pulls/2190

refs #2183 #2093

From this comment in #2183: If a "plugins" block appears in metadata.json, it means that a user hasn't migrated over their plugin configuration from metadata.json to datasette.yaml, which is a breaking change in Datasette 1.0.

This PR will ensure that an error is raised whenever that happens.
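A minimal sketch of the kind of guard this describes - the function name and exception type here are assumptions, not Datasette's actual code:

```python
# Hypothetical check run while loading metadata.json/metadata.yaml.
def check_no_plugins_in_metadata(metadata: dict) -> None:
    if "plugins" in metadata:
        raise ValueError(
            "A 'plugins' block is no longer allowed in metadata. "
            "Move plugin configuration into datasette.yaml instead."
        )
```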


:books: Documentation preview :books:: https://datasette--2190.org.readthedocs.build/en/2190/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2190/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1901768721 PR_kwDOBm6k_c5anSg5 2191 Move `permissions`, `allow` blocks, canned queries and more out of `metadata.yaml` and into `datasette.yaml` asg017 15178711 closed 0     4 2023-09-18T21:21:16Z 2023-10-12T16:16:38Z 2023-10-12T16:16:38Z CONTRIBUTOR simonw/datasette/pulls/2191

The PR moves the following fields from metadata.yaml to datasette.yaml:

  • permissions
  • allow
  • allow_sql
  • queries
  • extra_css_urls
  • extra_js_urls

This is a significant breaking change and users will need to upgrade their metadata.yaml files accordingly. But the format and locations are similar to the previous version, so it shouldn't be too difficult to upgrade.
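As a rough illustration of that migration (top-level keys only; the helper name and approach are assumptions, not code from this PR):

```python
# Keys this PR moves from metadata.yaml into datasette.yaml.
MOVED_KEYS = {
    "permissions", "allow", "allow_sql",
    "queries", "extra_css_urls", "extra_js_urls",
}

def split_legacy_metadata(metadata: dict) -> tuple[dict, dict]:
    """Return (datasette_yaml_config, remaining_metadata)."""
    config = {k: v for k, v in metadata.items() if k in MOVED_KEYS}
    remaining = {k: v for k, v in metadata.items() if k not in MOVED_KEYS}
    return config, remaining
```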

One note: I'm still working on the Configuration docs, specifically the "reference" section. Though it's pretty small, the rest is ready to review.

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2191/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1891212159 PR_kwDOBm6k_c5aD33C 2183 `datasette.yaml` plugin support asg017 15178711 closed 0     4 2023-09-11T20:26:04Z 2023-09-13T21:06:25Z 2023-09-13T21:06:25Z CONTRIBUTOR simonw/datasette/pulls/2183

Part of #2093

In #2149, we ported "settings.json" over into the new datasette.yaml config file, with a top-level "settings" key. This PR ports plugin configuration over into a top-level "plugins" key, as well as nested database/table plugin config.

From now on, plugin-related configuration is no longer allowed in metadata.yaml and must live in datasette.yaml in this new format. This is a pretty significant breaking change. Thankfully, you should be able to copy-paste your legacy plugin key/values into the new datasette.yaml format.

An example of what datasette.yaml would look like with this new plugin config:

```yaml
plugins:
  datasette-my-plugin:
    config_key: value

databases:
  fixtures:
    plugins:
      datasette-my-plugin:
        config_key: fixtures-db-value
    tables:
      students:
        plugins:
          datasette-my-plugin:
            config_key: fixtures-students-table-value
```

As an additional benefit, this now works with the new -s flag:

```bash
datasette --memory -s 'plugins.datasette-my-plugin.config_key' new_value
```
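For illustration, here's roughly how a dotted -s path can map onto the nested config - an assumed sketch of the mechanics, not Datasette's actual implementation:

```python
def set_nested(config: dict, dotted_path: str, value) -> None:
    # Walk (creating as needed) the intermediate dicts, then set the leaf.
    *parents, leaf = dotted_path.split(".")
    node = config
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value

config: dict = {}
set_nested(config, "plugins.datasette-my-plugin.config_key", "new_value")
# config == {"plugins": {"datasette-my-plugin": {"config_key": "new_value"}}}
```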

Marked as a "Draft" right now until I add better documentation. We also should have a plan for the next alpha release to document and publicize this change, especially for plugin authors (since their docs will have to change to say datasette.yaml instead of metadata.yaml).


:books: Documentation preview :books:: https://datasette--2183.org.readthedocs.build/en/2183/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2183/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1870672704 PR_kwDOBm6k_c5Y-7Em 2162 Add new `--internal internal.db` option, deprecate legacy `_internal` database asg017 15178711 closed 0     4 2023-08-29T00:05:07Z 2023-08-29T03:24:23Z 2023-08-29T03:24:23Z CONTRIBUTOR simonw/datasette/pulls/2162

refs #2157

This PR adds a new --internal option to datasette serve. If provided, it is the path to a persistent internal database that Datasette core and Datasette plugins can use to store data, as discussed in the proposal issue.

This PR also removes and deprecates the previous in-memory _internal database. Those tables now appear in the internal database, with core_ prefixes (e.g. the tables table in _internal is now core_tables in the internal database).

A note on the new core_ tables

However, one important note about those new core_ tables: if an --internal DB is passed in, those core_ tables will persist across multiple Datasette instances. This wasn't the case before, since _internal was always an in-memory database created from scratch.

I tried to create those core_ tables as TEMP tables - after all, there's only one internal DB connection at a time, so I figured it would work. But, since we use the Database() wrapper for the internal DB, it has two separate connections: a default read-only connection and a write connection that is created when a write operation occurs. That meant the TEMP tables would be created by the write connection but not available in the read-only connection.

So I had a brilliant idea: attach an in-memory named database with cache=shared, and create those tables there!

```sql
ATTACH DATABASE 'file:datasette_internal_core?mode=memory&cache=shared' AS core;
```

We'd run this on both the read-only connection and the write connection. That way, those tables would stay in memory, both connections would see the same data thanks to the cache=shared feature, and we'd be good to go.
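Here's a minimal standalone sketch of that trick using plain sqlite3 and two connections in the same process - an illustration of the SQLite shared-cache feature, not of Datasette's Database() wrapper:

```python
import sqlite3

# Both connections open the same named in-memory database; cache=shared
# means they share one page cache instead of getting two private
# in-memory databases.
URI = "file:datasette_internal_core?mode=memory&cache=shared"
write_conn = sqlite3.connect(URI, uri=True)
read_conn = sqlite3.connect(URI, uri=True)

write_conn.execute("CREATE TABLE core_tables (name TEXT)")
write_conn.execute("INSERT INTO core_tables VALUES ('example')")
write_conn.commit()

# The second connection sees the table created by the first.
print(read_conn.execute("SELECT name FROM core_tables").fetchall())
```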

However, I couldn't find an easy way to run an ATTACH DATABASE command on the read-only connection.

Using Database() as a wrapper for the internal DB is pretty limiting - it's meant for Datasette "data" databases, where we want multiple readers and possibly one write connection at a time. But the internal database doesn't really require that kind of support - I think we could get away with a single read/write connection, but it seemed like too big of a rabbit hole to go through now.


:books: Documentation preview :books:: https://datasette--2162.org.readthedocs.build/en/2162/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2162/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1861812208 PR_kwDOBm6k_c5YhH-W 2149 Start a new `datasette.yaml` configuration file, with settings support asg017 15178711 closed 0     2 2023-08-22T16:24:16Z 2023-08-23T01:26:11Z 2023-08-23T01:26:11Z CONTRIBUTOR simonw/datasette/pulls/2149

refs #2093 #2143

This is the first step to implementing the new datasette.yaml/datasette.json configuration file.

  • The old --config argument is now back, and is the path to a datasette.yaml file. It acts like the --metadata flag.
  • The old settings.json behavior has been removed.
  • The "settings" key inside datasette.yaml defines the same --settings flags.
  • Values passed in --settings will overwrite values in datasette.yaml (a sketch of this precedence follows below).
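Here is that precedence as a tiny sketch (assumed mechanics, not the actual implementation):

```python
def resolve_settings(config_settings: dict, cli_settings: dict) -> dict:
    # Start from the "settings" block in datasette.yaml, then let
    # --settings values from the command line win on conflicts.
    merged = dict(config_settings)
    merged.update(cli_settings)
    return merged

resolve_settings({"sql_time_limit_ms": 1000}, {"sql_time_limit_ms": 3000})
# -> {"sql_time_limit_ms": 3000}
```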

Docs for the config file are pretty light - there's not much to add until we add more config to the file.


:books: Documentation preview :books:: https://datasette--2149.org.readthedocs.build/en/2149/

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2149/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1344823170 PR_kwDOBm6k_c49e3_k 1789 Add new entrypoint option to `--load-extension` asg017 15178711 closed 0     9 2022-08-19T19:27:47Z 2022-08-23T18:42:52Z 2022-08-23T18:34:30Z CONTRIBUTOR simonw/datasette/pulls/1789

Closes #1784

The --load-extension flag can now accept an optional "entrypoint" value, to specify which entrypoint SQLite should load from the given extension.

```bash
# would load default entrypoint like before
datasette data.db --load-extension ext

# loads the extension with the "sqlite3_foo_init" entrypoint
datasette data.db --load-extension ext:sqlite3_foo_init

# loads the extension with the "sqlite3_bar_init" entrypoint
datasette data.db --load-extension ext:sqlite3_bar_init
```
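Roughly, the new syntax splits on the last colon - a hypothetical sketch of the parsing, since the actual implementation (and its handling of e.g. Windows paths) may differ:

```python
def parse_load_extension(value: str) -> tuple[str, str | None]:
    # "ext"                  -> ("ext", None)
    # "ext:sqlite3_foo_init" -> ("ext", "sqlite3_foo_init")
    if ":" in value:
        path, entrypoint = value.rsplit(":", 1)
        return path, entrypoint
    return value, None
```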

For testing, I added a small SQLite extension in C at tests/ext.c. If compiled, then pytest will run the unit tests in test_load_extensions.py to verify that Datasette loads in extensions correctly (and loads the correct entrypoints). Compiling the extension requires a C compiler; I compiled it on my Mac with:

```bash
gcc ext.c -I path/to/sqlite -fPIC -shared -o ext.dylib
```

Where path/to/sqlite is a directory that contains the SQLite amalgamation header files.

Re documentation: I added a bit to the help text for --load-extension (which I believe should auto-add to the documentation?), and the existing extension documentation is SpatiaLite-specific. Let me know if a new extensions documentation page would be helpful!

datasette 107914493 pull    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1789/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
0  
1339663518 I_kwDOBm6k_c5P2aSe 1784 Include "entrypoint" option on `--load-extension`? asg017 15178711 closed 0     2 2022-08-16T00:22:57Z 2022-08-23T18:34:31Z 2022-08-23T18:34:31Z CONTRIBUTOR  

Problem

SQLite extensions have the option to define multiple "entrypoints" in each loadable extension. For example, the upcoming version of sqlite-lines will have 2 entrypoints: the default sqlite3_lines_init (which SQLite will automatically guess for) and sqlite3_lines_noread_init. The sqlite3_lines_noread_init version omits functions that read from the filesystem, which is necessary for security purposes when running untrusted SQL (which Datasette does).

(Similar multiple entrypoints will also be added for sqlite-http).

The --load-extension flag, however, doesn't give the option to specify a different entrypoint, so the default one is always used.

Proposal

I want there to be a new command line option on the --load-extension flag to specify a custom entrypoint, like so:

```bash
datasette my.db \
  --load-extension ./lines0 sqlite3_lines0_noread_init
```

Then, under the hood, this line of code:

https://github.com/simonw/datasette/blob/7af67b54b7d9bca43e948510fc62f6db2b748fa8/datasette/app.py#L562

Would look something like this:

```python
conn.execute("SELECT load_extension(?, ?)", [extension, entrypoint])
```
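For context, a self-contained sqlite3 version of that call, using the extension path and entrypoint from this issue - this assumes ./lines0 has been compiled and that your Python's sqlite3 module was built with extension loading enabled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)  # extension loading is off by default

# Same SQL-level call as above, with an explicit second entrypoint argument:
conn.execute(
    "SELECT load_extension(?, ?)",
    ["./lines0", "sqlite3_lines0_noread_init"],
)
```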

One potential problem: for backward compatibility, I'm not sure if Click allows CLI flags to have a variable number of options ("arity"). So I guess it could also use a : delimiter, like --static does:

```bash
datasette my.db \
  --load-extension ./lines0:sqlite3_lines0_noread_init
```

Or maybe even a new flag name?

```bash
datasette my.db \
  --load-extension-entrypoint ./lines0 sqlite3_lines0_noread_init
```

Personally I prefer the : option... and maybe even --load-extension -> --load? Definitely out of scope for this issue tho

```bash
datasette my.db \
  --load ./lines0:sqlite3_lines0_noread_init
```

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1784/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);