issues
5 rows where repo = 107914493 and user = 12617395, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
340396247 | MDU6SXNzdWUzNDAzOTYyNDc= | 339 | Expose SANIC_RESPONSE_TIMEOUT config option in a sensible way | bsilverm 12617395 | closed | 0 | | | 4 | 2018-07-11T20:38:06Z | 2022-03-21T22:22:40Z | 2022-03-21T22:22:34Z | NONE | | Is it possible to configure sql_time_limit_ms beyond 60 seconds? Queries still seem to time out at 60 seconds even when sql_time_limit_ms is set to 180000. We have a very large data set and often encounter timeouts when testing new queries from the datasette UI. We are optimizing our database as much as we can, but may still require more than 60 seconds for complex queries. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed |
510076368 | MDU6SXNzdWU1MTAwNzYzNjg= | 605 | Support queries at the table level | bsilverm 12617395 | open | 0 | | | 2 | 2019-10-21T15:58:30Z | 2019-10-30T18:55:37Z | | NONE | | As discussed in issue #588, queries are not currently supported at the table level. Per my last comment on that issue, I'd like to request support for this, as it would help eliminate errors in the event certain tables are not present in the database. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
505512251 | MDU6SXNzdWU1MDU1MTIyNTE= | 588 | Queries per DB table in metadata.json | bsilverm 12617395 | closed | 0 | | | 3 | 2019-10-10T21:08:19Z | 2019-10-21T12:58:22Z | 2019-10-21T01:48:42Z | NONE | | It doesn't appear possible to have separate queries defined per database table. When I do something like below, my table descriptions show up but not the queries: | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed |
341123355 | MDU6SXNzdWUzNDExMjMzNTU= | 342 | Requesting support for query description | bsilverm 12617395 | closed | 0 | | | 4 | 2018-07-13T18:50:16Z | 2018-07-24T04:53:21Z | 2018-07-16T02:33:54Z | NONE | | It would be great if the metadata file allowed you to enter a description for a query. We have a lot of pre-defined queries that can only be so descriptive by their name. It would be nice if an optional description could be shown underneath the name within the UI, or on hover where it currently shows the SQL. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed |
334190959 | MDU6SXNzdWUzMzQxOTA5NTk= | 321 | Wildcard support in query parameters | bsilverm 12617395 | closed | 0 | | 0.23.1 3439337 | 8 | 2018-06-20T18:03:56Z | 2018-06-21T17:00:10Z | 2018-06-21T04:55:26Z | NONE | | I haven't found a way to get the wildcard (%) inserted automatically into a query parameter. This would be useful for cases where the query parameter is followed by a LIKE clause. Wrapping the parameter name with the wildcard character within the metadata file (i.e. ...where xyz like %:querystring%) does not seem to work. Can this be made possible? Or if not, can the template be extended to provide a tip to the user that they need to insert the wildcard characters themselves? | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed |
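The wildcard question in issue 321 above comes down to where the % characters get added. A minimal sketch of the usual SQLite workaround (not taken from the issue thread itself) is to concatenate the wildcards around the named parameter inside the SQL, so the person running the query only types the bare search term; the column and parameter names below are illustrative:

-- Sketch: wrap a bare :search value in % wildcards inside the query itself,
-- rather than expecting the user to type the wildcards into the parameter box.
select number, title, state
from issues
where title like '%' || :search || '%'
order by updated_at desc;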
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [pull_request] TEXT,
  [body] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT,
  [active_lock_reason] TEXT,
  [performed_via_github_app] TEXT,
  [reactions] TEXT,
  [draft] INTEGER,
  [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
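For reference, the filtered view described at the top of this page (5 rows where repo = 107914493 and user = 12617395, sorted by updated_at descending) can be reproduced against this schema with a query along these lines; the literal IDs come from the filter description above:

-- Reproduce this page's view: issues filed by user 12617395 against repo
-- 107914493 (simonw/datasette), newest update first.
select *
from issues
where repo = 107914493
  and [user] = 12617395
order by updated_at desc;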