issues

9 rows where state = "closed", type = "issue" and user = 536941 sorted by updated_at descending


Facets:
  • repo: datasette (5) · sqlite-utils (4)
  • type: issue (9)
  • state: closed (9)
#336: sqlite-utils transform --column-order mangles columns of type "timestamp"
repo: sqlite-utils (140912432) · type: issue · id: 1044267332 · node_id: I_kwDOCGYnMM4-PkFE
user: fgregg (536941) · state: closed (completed) · comments: 1 · author_association: CONTRIBUTOR
created: 2021-11-04T01:15:38Z · updated: 2023-05-08T21:13:38Z · closed: 2023-05-08T21:13:38Z

Reproducible code below:

```bash
$ echo 'create table bar (baz text, created_at timestamp default CURRENT_TIMESTAMP)' | sqlite3 foo.db
$ sqlite3 foo.db
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
sqlite> .schema bar
CREATE TABLE bar (baz text, created_at timestamp default CURRENT_TIMESTAMP);
sqlite> .exit
$ sqlite-utils transform foo.db bar --column-order baz
$ sqlite3 foo.db
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
sqlite> .schema bar
CREATE TABLE IF NOT EXISTS "bar" (
   [baz] TEXT,
   [created_at] FLOAT DEFAULT 'CURRENT_TIMESTAMP'
);
sqlite> .exit
$ sqlite-utils transform foo.db bar --column-order baz
$ sqlite3 foo.db
SQLite version 3.36.0 2021-06-18 18:36:39
Enter ".help" for usage hints.
sqlite> .schema bar
CREATE TABLE IF NOT EXISTS "bar" (
   [baz] TEXT,
   [created_at] FLOAT DEFAULT '''CURRENT_TIMESTAMP'''
);
```
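
The session above shows two problems: each run of transform wraps the stored default in another layer of quotes, turning the CURRENT_TIMESTAMP keyword into an ever-more-quoted string literal, and the declared type is rewritten from timestamp to FLOAT. For contrast, a hedged sketch of what the schema would look like if transform preserved both (this is the hypothetical expected output, not what sqlite-utils currently produces):

```bash
# Hypothetical expected output -- NOT what sqlite-utils currently emits.
# The declared type and the bare (unquoted) keyword default both survive:
sqlite3 foo.db '.schema bar'
# CREATE TABLE IF NOT EXISTS "bar" (
#    [baz] TEXT,
#    [created_at] TIMESTAMP DEFAULT CURRENT_TIMESTAMP
# );
```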

reactions: 0

#1890: Autocomplete text entry for filter values that correspond to facets
repo: datasette (107914493) · type: issue · id: 1448143294 · node_id: I_kwDOBm6k_c5WUOm-
user: fgregg (536941) · state: closed (completed) · comments: 16 · author_association: CONTRIBUTOR
created: 2022-11-14T14:11:31Z · updated: 2022-11-17T00:47:36Z · closed: 2022-11-16T03:23:01Z

Datasette allows users to enter the value for named parameters into a free-text form field.

It would add a lot of usability if the form field could be a drop-down of options whenever the query value corresponds to a faceted column.

reactions: 0

#1834: inspect data is not used for caching database hash
repo: datasette (107914493) · type: issue · id: 1400083043 · node_id: I_kwDOBm6k_c5Tc5Jj
user: fgregg (536941) · state: closed (completed) · comments: 0 · author_association: CONTRIBUTOR
created: 2022-10-06T17:52:01Z · updated: 2022-10-06T20:06:21Z · closed: 2022-10-06T20:06:08Z

When databases are loaded,

https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/app.py#L257-L260

there is nothing preventing the re-hashing of immutable databases:

https://github.com/simonw/datasette/blob/cb1e093fd361b758120aefc1a444df02462389a3/datasette/database.py#L50-L53

What I would expect is for the relevant values of inspect_data to be passed to the Database class, preventing the re-hash.

With databases that are many gigabytes in size, this adds significant startup time.
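
For reference, the pieces already exist: below is a minimal sketch of the documented inspect/serve workflow this issue wants honored (the file names are hypothetical). The issue reports that the precomputed hash is not actually reused, so the second command still re-hashes the file.

```bash
# Compute inspect data (including the content hash) once, offline:
datasette inspect warehouse.db --inspect-file=inspect-data.json

# Serve the database as immutable, passing the precomputed inspect data
# so startup would not need to re-hash a multi-gigabyte file:
datasette serve -i warehouse.db --inspect-file=inspect-data.json
```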

reactions: 1 (+1: 1)

#1779: google cloudrun updated their limits on maxscale based on memory and cpu count
repo: datasette (107914493) · type: issue · id: 1334628400 · node_id: I_kwDOBm6k_c5PjNAw
user: fgregg (536941) · state: closed (completed) · milestone: Datasette 0.62 (8303187) · comments: 13 · author_association: CONTRIBUTOR
created: 2022-08-10T13:27:21Z · updated: 2022-08-14T19:42:59Z · closed: 2022-08-14T17:07:34Z

If you don't set an explicit limit on container scaling, Google defaults to 100.

Google recently updated its limits on container scaling so that if you set up datasette to use more memory or CPU, you need to set the maxScale argument to something much smaller than 100.

It would be nice if datasette publish could do this math for you and set the right maxScale.

Log of a failing publish run:

ERROR: (gcloud.run.deploy) spec.template.spec.containers[0].resources.limits.cpu: Invalid value specified for cpu. For the specified value, maxScale may not exceed 15. Consider running your workload in a region with greater capacity, decreasing your requested cpu-per-instance, or requesting an increase in quota for this region if you are seeing sustained usage near this limit, see https://cloud.google.com/run/quotas. Your project may gain access to further scaling by adding billing information to your account.

Traceback (most recent call last):
  File "/home/runner/.local/bin/datasette", line 8, in <module>
    sys.exit(cli())
  File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1659, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/runner/.local/lib/python3.8/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/home/runner/.local/lib/python3.8/site-packages/datasette/publish/cloudrun.py", line 160, in cloudrun
    check_call(
  File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'gcloud run deploy --allow-unauthenticated --platform=managed --image gcr.io/labordata/datasette warehouse --memory 8Gi --cpu 2' returned non-zero exit status 1.
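
Until datasette publish does this math itself, a hedged workaround sketch: re-run the same deploy from the log with an explicit cap via gcloud's --max-instances flag (the 15 comes straight from the error message above):

```bash
# Same deploy as in the failing log, plus an explicit maximum instance
# count at or below the limit Cloud Run reported for 2 CPUs / 8Gi:
gcloud run deploy warehouse \
  --allow-unauthenticated \
  --platform=managed \
  --image gcr.io/labordata/datasette \
  --memory 8Gi \
  --cpu 2 \
  --max-instances 15
```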

reactions: 0

#1581: when hashed urls are turned on, the _memory db has improperly long-lived cache expiry
repo: datasette (107914493) · type: issue · id: 1089529555 · node_id: I_kwDOBm6k_c5A8ObT
user: fgregg (536941) · state: closed (completed) · comments: 1 · author_association: CONTRIBUTOR
created: 2021-12-28T00:05:48Z · updated: 2022-03-24T04:08:18Z · closed: 2022-03-24T04:08:18Z

If hashed_urls are on, a -000 suffix is added to the _memory database, and the cache settings are set just as if it were a normal hashed database.

In particular, this header is set:

cache-control: max-age=31536000

This is not appropriate, because the _memory-000 database isn't actually hashed based on the contents of the databases (see #1561).

Either the cache-control header should be changed, or the _memory db should have a hash suffix that does depend on the contents of the databases.
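
A quick way to observe the header in question, as a sketch against a hypothetical local instance started with hashed URLs enabled:

```bash
# Fetch only the response headers for the suffixed in-memory database
# page and inspect the year-long cache lifetime it is given:
curl -sI 'http://127.0.0.1:8001/_memory-000' | grep -i '^cache-control'
# cache-control: max-age=31536000
```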

reactions: 0

#1561: add hash id to "_memory" url if hashed url mode is turned on and crossdb is also turned on
repo: datasette (107914493) · type: issue · id: 1082765654 · node_id: I_kwDOBm6k_c5AibFW
user: fgregg (536941) · state: closed (completed) · comments: 3 · author_association: CONTRIBUTOR
created: 2021-12-17T00:45:12Z · updated: 2022-03-19T04:45:40Z · closed: 2022-03-19T04:45:40Z

If hashed_url mode is turned on and crossdb is also turned on, then queries to _memory should have a hash_id.

One way it could work is to make the _memory hash a hash of all the individual databases.

Otherwise, crossdb queries can get quite out of date when aggressive caching is in use.
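
One way the combined hash could work, sketched in shell rather than as a datasette implementation (the database file names are hypothetical): hash the per-database content hashes, so the _memory suffix changes whenever any attached database changes.

```bash
# Hash each attached database, then hash the sorted list of those hashes
# and truncate it to get a suffix for the _memory database:
sha256sum fixtures.db other.db | sort | sha256sum | cut -c1-7
```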

reactions: 0

#403: Document how to add a primary key to a rowid table using `sqlite-utils transform --pk`
repo: sqlite-utils (140912432) · type: issue · id: 1126692066 · node_id: I_kwDOCGYnMM5DJ_Ti
user: fgregg (536941) · state: closed (completed) · comments: 4 · author_association: CONTRIBUTOR
created: 2022-02-08T01:39:40Z · updated: 2022-02-09T04:22:43Z · closed: 2022-02-08T19:33:59Z

Original title: Add option for adding a new, serial, primary key

Sometimes we have tables that don't have primary keys but ought to have them. We can use rowid for that, but it would often be nicer to have an explicit primary key. Using the current value of rowid would be fine.
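
The recipe the retitled issue documents, as a sketch with hypothetical database and table names:

```bash
# Add an explicit primary key column to a table that previously relied
# on the implicit rowid:
sqlite-utils transform my.db my_table --pk id
```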

reactions: 0

#365: create-index should run analyze after creating index
repo: sqlite-utils (140912432) · type: issue · id: 1096558279 · node_id: I_kwDOCGYnMM5BXCbH
user: fgregg (536941) · state: closed (completed) · milestone: 3.21 (7558727) · comments: 16 · author_association: CONTRIBUTOR
created: 2022-01-07T18:21:25Z · updated: 2022-01-11T02:43:34Z · closed: 2022-01-11T01:36:48Z

SQLite's query planner depends on ANALYZE to make good use of indices. It would be nice if ANALYZE were run as part of the create-index command.

If data is inserted later, the statistics can get out of date, but it would still probably be a net win.
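
The manual two-step equivalent today, as a sketch (database, table, and column names are hypothetical):

```bash
# Create the index, then run ANALYZE so SQLite's query planner has the
# statistics it needs to actually choose the new index:
sqlite-utils create-index my.db my_table my_column
sqlite3 my.db 'ANALYZE;'
```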

reactions: 0

#353: Allow passing a file of code to "sqlite-utils convert"
repo: sqlite-utils (140912432) · type: issue · id: 1077102934 · node_id: I_kwDOCGYnMM5AM0lW
user: fgregg (536941) · state: closed (completed) · comments: 8 · author_association: CONTRIBUTOR
created: 2021-12-10T18:06:14Z · updated: 2021-12-11T01:38:29Z · closed: 2021-12-11T01:09:39Z

sqlite-utils is so nice, but the ergonomics of the multiline code are kind of tough. It's really hard (maybe impossible) to make the newlines play well with Makefiles.

It would be great to write your code fragment in a separate file and direct it into sqlite-utils, either like

sqlite-utils convert my.db my_table my_column < custom_code.py

or

sqlite-utils convert my.db my_table my_column --custom-code=custom_code.py

Thanks, as ever, for these great tools!
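
In the meantime, because convert accepts the code as a positional argument, shell command substitution offers a workaround sketch (file names as proposed above):

```bash
# Read the conversion code from a file and pass it as the CODE argument:
sqlite-utils convert my.db my_table my_column "$(cat custom_code.py)"
```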

reactions: 0

Table schema:

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);