
github / issues


7 rows where user = 7936571 sorted by updated_at descending


Facets: state — closed 5, open 2 · type — issue 7 · repo — datasette 7
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sorted descending), closed_at, author_association, pull_request, body, repo, type, active_lock_reason, performed_via_github_app, reactions, draft, state_reason
id: 453131917 · node_id: MDU6SXNzdWU0NTMxMzE5MTc= · number: 502
title: Exporting sqlite database(s)?
user: chrismp (7936571) · state: closed · locked: 0 · comments: 3
created_at: 2019-06-06T16:39:53Z · updated_at: 2021-04-03T05:16:54Z · closed_at: 2019-06-11T18:50:42Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/502/reactions)
state_reason: completed

I'm working on datasette from one computer. But if I want to work on it from another computer, how do I copy the SQLite database(s) already on the Heroku datasette instance to that second computer, so that I can update them there and push them back to Heroku with datasette's command-line publishing tool?
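On #502: a SQLite database is a single file, so copying that file moves the whole database. A minimal sketch using the stdlib `sqlite3` backup API, which takes a consistent snapshot even if the source is being read at the time (the filenames here are hypothetical, not from the issue):

```python
import sqlite3

# Hypothetical filenames. The .db file is the entire database, so a
# consistent copy of it is all the second computer needs.
src = sqlite3.connect("mydata.db")
dst = sqlite3.connect("mydata-copy.db")
src.backup(dst)  # page-by-page snapshot into mydata-copy.db
src.close()
dst.close()
```

The copied file can then be moved with scp or rsync and republished from the second machine.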
id: 459397625 · node_id: MDU6SXNzdWU0NTkzOTc2MjU= · number: 514
title: Documentation with recommendations on running Datasette in production without using Docker
user: chrismp (7936571) · state: closed · locked: 0 · milestone: Datasette 0.50 (5971510) · comments: 27
created_at: 2019-06-21T22:48:12Z · updated_at: 2020-10-08T23:55:53Z · closed_at: 2020-10-08T23:33:05Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/514/reactions)
state_reason: completed

I've got some SQLite databases too big to push to Heroku or the other services with built-in support in datasette.

So instead I moved my datasette code and databases to a remote server on Kimsufi. In the folder containing the SQLite databases I run the following command:

nohup datasette serve -h 0.0.0.0 *.db --cors --port 8000 --metadata metadata.json > output.log 2>&1 &

When I go to http://my-remote-server.com:8000, the site loads. But I know this is not a good long-term solution for running datasette on this server.

What is the "correct" way to run this site, preferably on server port 80?
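The usual answer to #514's question is a process supervisor plus a reverse proxy, rather than `nohup`. One common pattern (not the only correct one) is a systemd unit; this is a sketch with hypothetical paths, user, and working directory, reusing the flags from the reporter's own command:

```ini
# /etc/systemd/system/datasette.service  (hypothetical path and values)
[Unit]
Description=Datasette
After=network.target

[Service]
User=www-data
WorkingDirectory=/home/chris/datasette-data
# systemd does not expand globs, so wrap the command in a shell for *.db
ExecStart=/bin/sh -c 'exec datasette serve -h 127.0.0.1 --port 8000 --cors --metadata metadata.json *.db'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling the unit keeps the process alive across crashes and reboots; a reverse proxy such as nginx then forwards port 80 to 127.0.0.1:8000, and binding to 127.0.0.1 keeps the app off the public interface.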
id: 451513541 · node_id: MDU6SXNzdWU0NTE1MTM1NDE= · number: 498
title: Full text search of all tables at once?
user: chrismp (7936571) · state: closed · locked: 0 · comments: 12
created_at: 2019-06-03T14:24:43Z · updated_at: 2020-05-30T17:26:02Z · closed_at: 2020-05-30T17:26:02Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/498/reactions)
state_reason: completed

Does datasette have a built-in way, in a browser, to do a full-text search of all columns, in all databases and tables, that have full-text search enabled? Is there a plugin that does this?
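The underlying query for #498 is straightforward even without a plugin: enumerate a database's full-text-search virtual tables from `sqlite_master` and run one MATCH against each. This is a sketch of that pattern, not Datasette's implementation (the `datasette-search-all` plugin later provided it in the browser):

```python
import sqlite3

def search_everywhere(conn, query):
    """Run an FTS MATCH against every FTS virtual table in one database."""
    fts_tables = [
        row[0]
        for row in conn.execute(
            "select name from sqlite_master "
            "where sql like 'CREATE VIRTUAL TABLE%USING FTS%'"
        )
    ]
    hits = {}
    for table in fts_tables:
        rows = conn.execute(
            f'select * from "{table}" where "{table}" match ?', (query,)
        ).fetchall()
        if rows:
            hits[table] = rows
    return hits
```

Searching every attached database would mean repeating this once per connection.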
id: 453243459 · node_id: MDU6SXNzdWU0NTMyNDM0NTk= · number: 503
title: Handle SQLite databases with spaces in their names?
user: chrismp (7936571) · state: closed · locked: 0 · assignee: simonw (9599) · comments: 1
created_at: 2019-06-06T21:20:59Z · updated_at: 2019-11-04T23:16:30Z · closed_at: 2019-11-04T23:16:30Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/503/reactions)
state_reason: completed

I named my SQLite database "Government workers" and published it to Heroku. When I clicked the "Government workers" database online it led to a 404 page: Database not found: Government%20workers.

I believe this is because the database name has a space.
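A user-side workaround for #503 (as opposed to the fix in Datasette itself) is to avoid spaces in database filenames before publishing. A hypothetical helper:

```python
from pathlib import Path

def slugify_db_names(directory="."):
    """Rename 'Government workers.db'-style files to URL-friendly names."""
    for db in Path(directory).glob("*.db"):
        if " " in db.name:
            db.rename(db.with_name(db.name.replace(" ", "-").lower()))
```

Running this in the folder of databases before `datasette publish` sidesteps the `%20` escaping entirely.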
id: 457147936 · node_id: MDU6SXNzdWU0NTcxNDc5MzY= · number: 512
title: "about" parameter in metadata does not appear when alone
user: chrismp (7936571) · state: open · locked: 0 · comments: 3
created_at: 2019-06-17T21:04:20Z · updated_at: 2019-10-11T15:49:13Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/512/reactions)

Here's an example of metadata I have for one database on datasette:

    "Records-requests": {
        "tables": {
            "Some table": {
                "about": "This table has data."
            }
        }
    }

The text in "about" does not show up when I publish the data. But it shows up after I add a "source" parameter in the metadata.

Is this intended?
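For context on #512, a per-database fragment like the reporter's sits under a top-level "databases" key in Datasette's metadata.json. This sketch shows that placement; the "source" value is a made-up illustration of the workaround the reporter describes, not something from the issue:

```json
{
    "databases": {
        "Records-requests": {
            "tables": {
                "Some table": {
                    "about": "This table has data.",
                    "source": "Example source"
                }
            }
        }
    }
}
```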
id: 457201907 · node_id: MDU6SXNzdWU0NTcyMDE5MDc= · number: 513
title: Is it possible to publish to Heroku despite slug size being too large?
user: chrismp (7936571) · state: closed · locked: 0 · comments: 2
created_at: 2019-06-18T00:12:02Z · updated_at: 2019-06-21T22:35:54Z · closed_at: 2019-06-21T22:35:54Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/513/reactions)
state_reason: completed

I'm trying to push more than 1.5GB worth of SQLite databases -- 535MB compressed -- to Heroku, but I get this error when I run the datasette publish heroku command:

Compiled slug size: 535.5M is too large (max is 500M).

Can I publish the databases and make datasette work on Heroku despite the large slug size?
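One thing worth trying for a near-miss like #513's 535M vs 500M limit is VACUUM, which rebuilds the database file and discards free pages. A sketch (the filename is hypothetical, and it only helps when the file actually contains reclaimable space):

```python
import sqlite3

def shrink(db_path):
    """Rebuild the database file in place, discarding free pages."""
    conn = sqlite3.connect(db_path)
    conn.execute("VACUUM")  # rewrites the file; often smaller afterwards
    conn.close()
```

If that is not enough, the databases can be split across apps or moved to a host without a slug limit, as the reporter eventually did in #514.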
id: 451585764 · node_id: MDU6SXNzdWU0NTE1ODU3NjQ= · number: 499
title: Accessibility for non-techie newsies?
user: chrismp (7936571) · state: open · locked: 0 · comments: 3
created_at: 2019-06-03T16:49:37Z · updated_at: 2019-06-05T21:22:55Z
author_association: NONE · repo: datasette (107914493) · type: issue
reactions: total_count 0 (https://api.github.com/repos/simonw/datasette/issues/499/reactions)

Hi again, I'm having fun uploading datasets to Heroku via datasette. I'd like to set up datasette so that it's easy for other newsroom workers, who don't use Linux and aren't programmers, to upload datasets. Does datasette provide this out of the box, or as a plugin?


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
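The listing above is the result of the query the page header describes ("7 rows where user = 7936571 sorted by updated_at descending") against this schema, served by the `idx_issues_user` index. A minimal sketch of that query:

```python
import sqlite3

def issues_for_user(conn, user_id):
    # Mirrors the page's listing: one reporter's issues, newest-updated first.
    return conn.execute(
        "select number, title, state, updated_at from issues "
        "where user = ? order by updated_at desc",
        (user_id,),
    ).fetchall()
```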
Powered by Datasette · Queries took 76.089ms · About: github-to-sqlite