issue_comments
2 rows where issue = 317001500 and user = 193185 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
612216820 | https://github.com/simonw/datasette/issues/236#issuecomment-612216820 | https://api.github.com/repos/simonw/datasette/issues/236 | MDEyOklzc3VlQ29tbWVudDYxMjIxNjgyMA== | cldellow 193185 | 2020-04-10T21:03:38Z | 2020-04-10T21:03:38Z | CONTRIBUTOR | I made a repo at https://github.com/code402/datasette-lambda to demonstrate the idea, and scratch my personal itch for this. The demo relies on some central authority having already published a public, reusable Lambda layer with Datasette & its dependencies. I think that differs from the other publish plugins, which seem to mainly publish Dockerfiles that the host will interpret to install deps from a requirements.txt file. I chose that approach because […] If it's the case that […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | datasette publish lambda plugin 317001500 | |
608716819 | https://github.com/simonw/datasette/issues/236#issuecomment-608716819 | https://api.github.com/repos/simonw/datasette/issues/236 | MDEyOklzc3VlQ29tbWVudDYwODcxNjgxOQ== | cldellow 193185 | 2020-04-03T22:19:00Z | 2020-04-03T22:19:00Z | CONTRIBUTOR | Hi Simon, I'm thinking of attempting this. Can you clarify some questions I have? 1) I assume the goal is to have a CORS-friendly HTTPS endpoint that hosts the Datasette service + the user's db. 2) If that's the goal, I think Lambda alone is insufficient: Lambda provides the compute fabric, but not the HTTP routing. You'd also need to add an Application Load Balancer or API Gateway to provide an HTTP endpoint that routes to the Lambda function. Do you have a preference between ALB and API GW? ALB has better economics at scale, but has a minimum monthly cost. API GW has worse per-request economics, but scales to zero when no requests are happening. 3) Does Datasette have any native components, or is it all pure Python? If it has native bits, they'll likely need to be recompiled to work on Amazon Linux 2. 4) There are a few disparate services that need to be wired together to expose a Python service securely to the web. If I was doing this outside of the datasette publish system, I'd use an AWS CloudFormation template. Even within datasette, I think it still makes sense to use a CloudFormation template and just have the publish plugin invoke it (via the standard […]). Thanks for your help! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | datasette publish lambda plugin 317001500 | |
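
The first comment above centers on publishing a single reusable Lambda layer that bundles Datasette and its dependencies, rather than a Dockerfile-based build. As a rough sketch of what that one-time layer-publishing step could look like (the layer name, build paths, and runtime list below are illustrative assumptions, not part of datasette-lambda):

```python
# Hypothetical sketch: build a Lambda layer zip containing Datasette and its
# dependencies, then publish it so publish plugins can reuse its ARN.
# Layer name, paths, and runtime list are assumptions, not datasette-lambda's.
import subprocess
import sys
import zipfile
from pathlib import Path

import boto3

ROOT = Path("layer-build")
BUILD_DIR = ROOT / "python"            # Lambda layers expect a python/ prefix
LAYER_ZIP = Path("datasette-layer.zip")

# Install Datasette and its dependencies into the layer directory.
BUILD_DIR.mkdir(parents=True, exist_ok=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "datasette", "--target", str(BUILD_DIR)],
    check=True,
)

# Zip the build directory, keeping the python/ prefix Lambda expects.
with zipfile.ZipFile(LAYER_ZIP, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in ROOT.rglob("*"):
        if path.is_file():
            zf.write(path, path.relative_to(ROOT))

# Publish the layer; the returned ARN is what a publish plugin would reference.
lambda_client = boto3.client("lambda")
response = lambda_client.publish_layer_version(
    LayerName="datasette",             # hypothetical layer name
    Description="Datasette and dependencies",
    Content={"ZipFile": LAYER_ZIP.read_bytes()},
    CompatibleRuntimes=["python3.8"],
)
print(response["LayerVersionArn"])
```

The resulting LayerVersionArn is the "public, reusable" piece the comment describes: individual deployments could then attach it rather than rebuilding the dependency set each time.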
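
The second comment argues for wiring Lambda, its HTTP front door (API Gateway or an ALB), and the supporting resources together with a CloudFormation template that the publish plugin drives programmatically. A minimal sketch of that driving step using boto3, assuming a hypothetical template file, stack name, and parameter names:

```python
# Hypothetical sketch: have a publish plugin create or update a CloudFormation
# stack that wires Lambda + API Gateway together. The stack name, template
# path, and parameters are illustrative assumptions, not datasette-lambda's API.
from pathlib import Path

import boto3

cfn = boto3.client("cloudformation")
stack_name = "datasette-demo"                              # hypothetical
template_body = Path("datasette-stack.yaml").read_text()   # hypothetical template
parameters = [
    {"ParameterKey": "DatabaseS3Key", "ParameterValue": "fixtures.db"},
]

try:
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],   # the template creates an execution role
        Parameters=parameters,
    )
    waiter = cfn.get_waiter("stack_create_complete")
except cfn.exceptions.AlreadyExistsException:
    cfn.update_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],
        Parameters=parameters,
    )
    waiter = cfn.get_waiter("stack_update_complete")

waiter.wait(StackName=stack_name)

# Read the HTTPS endpoint (e.g. the API Gateway URL) from the stack outputs.
stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
print({o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])})
```

Keeping the topology in a template and having the plugin only create/update the stack matches the comment's point: the plugin stays thin, while the ALB-vs-API-Gateway decision lives in the template.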
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
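
The filter described at the top of this page (issue = 317001500 and user = 193185, sorted by updated_at descending) maps to a simple query against this schema. A sketch using Python's sqlite3 module, assuming the export lives in a file named github.db:

```python
# Reproduce this page's filter against the issue_comments table.
# The database filename (github.db) is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ? AND user = ?
    ORDER BY updated_at DESC
    """,
    (317001500, 193185),
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:80])
```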