
issue_comments


24 rows where author_association = "CONTRIBUTOR" and user = 82988 sorted by updated_at descending


issue 18

  • Datasette Library 3
  • Integration with JupyterLab 2
  • Exposing Datasette via Jupyter-server-proxy 2
  • Incorrect URLs when served behind a proxy with base_url set 2
  • datasette.urls.static_plugins(...) method 2
  • Ability to bundle and serve additional static files 1
  • More metadata options for template authors 1
  • Handle spatialite geometry columns better 1
  • datasette publish digitalocean plugin 1
  • Linked Data(sette) 1
  • ?_where=sql-fragment parameter for table views 1
  • Datasette doesn't reload when database file changes 1
  • Every datasette plugin on the ecosystem page should have a screenshot 1
  • First proof-of-concept of Datasette Library 1
  • Provide a cookiecutter template for creating new plugins 1
  • Add template block prior to extra URL loaders 1
  • Maybe let plugins define custom serve options? 1
  • Featured table(s) on the homepage 1

user 1

  • psychemedia · 24

author_association 1

  • CONTRIBUTOR · 24
id html_url issue_url node_id user created_at updated_at author_association body reactions issue performed_via_github_app
1420941334 https://github.com/simonw/datasette/pull/564#issuecomment-1420941334 https://api.github.com/repos/simonw/datasette/issues/564 IC_kwDOBm6k_c5UsdgW psychemedia 82988 2023-02-07T15:14:10Z 2023-02-07T15:14:10Z CONTRIBUTOR

Is this feature covered by any more recent updates to datasette, or via any plugins that you're aware of?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
First proof-of-concept of Datasette Library 473288428  
1248204219 https://github.com/simonw/datasette/issues/1810#issuecomment-1248204219 https://api.github.com/repos/simonw/datasette/issues/1810 IC_kwDOBm6k_c5KZhW7 psychemedia 82988 2022-09-15T14:44:47Z 2022-09-15T14:46:26Z CONTRIBUTOR

A couple+ of possible use case examples:

  • someone has a collection of articles indexed with FTS; they want to publish a simple search tool over the results;
  • someone has an image collection and they want to be able to search over description text to return images;
  • someone has a set of locations with descriptions, and wants to run a query over places and descriptions and get results as a listing or on a map;
  • someone has a set of audio or video files with titles, descriptions and/or transcripts, and wants to be able to search over them and return playable versions of returned items.

In many cases, I suspect the raw content will be in one table, but the search table will be a second (eg FTS) table. Generally, the search may be over one or more joined tables, and the results constructed from one or more tables (which may or may not be distinct from the search tables).
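The raw-content-table-plus-FTS-table pattern described above can be sketched with plain sqlite3; the `images`/`images_fts` names and columns are illustrative assumptions, not Datasette's own schema:

```python
import sqlite3

# A content table plus a separate FTS5 search table, joined at query time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT, description TEXT);
    CREATE VIRTUAL TABLE images_fts USING fts5(
        description, content='images', content_rowid='id'
    );
""")
conn.execute(
    "INSERT INTO images (path, description) VALUES (?, ?)",
    ("cat.jpg", "a tabby cat sitting on a wall"),
)
# Keep the search table in sync with the content table
conn.execute(
    "INSERT INTO images_fts (rowid, description) SELECT id, description FROM images"
)

# Search the FTS table, but build the result from the content table
rows = conn.execute(
    """
    SELECT images.path FROM images
    JOIN images_fts ON images_fts.rowid = images.id
    WHERE images_fts MATCH ?
    """,
    ("tabby",),
).fetchall()
```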

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Featured table(s) on the homepage 1374626873  
1010947634 https://github.com/simonw/datasette/issues/1591#issuecomment-1010947634 https://api.github.com/repos/simonw/datasette/issues/1591 IC_kwDOBm6k_c48QdYy psychemedia 82988 2022-01-12T11:32:17Z 2022-01-12T11:32:17Z CONTRIBUTOR

Is it possible to parse things like --ext-{plugin}-{arg} VALUE ?
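One way such `--ext-{plugin}-{arg} VALUE` options could be parsed (a hypothetical sketch, not Datasette's actual option handling; assumes single-token plugin names):

```python
import re

def parse_ext_args(argv):
    """Collect ["--ext-myplugin-level", "3", ...] style pairs into
    a nested dict: {"myplugin": {"level": "3"}}."""
    opts = {}
    i = 0
    while i < len(argv):
        m = re.match(r"--ext-([^-]+)-(.+)$", argv[i])
        if m and i + 1 < len(argv):
            plugin, arg = m.groups()
            opts.setdefault(plugin, {})[arg] = argv[i + 1]
            i += 2
        else:
            i += 1  # skip anything that isn't an --ext- pair
    return opts
```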

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Maybe let plugins define custom serve options? 1100015398  
752098906 https://github.com/simonw/datasette/issues/417#issuecomment-752098906 https://api.github.com/repos/simonw/datasette/issues/417 MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg== psychemedia 82988 2020-12-29T14:34:30Z 2020-12-29T14:34:50Z CONTRIBUTOR

FWIW, I had a look at watchdog for a datasette powered Jupyter notebook search tool: https://github.com/ouseful-testing/nbsearch/blob/main/nbsearch/nbwatchdog.py

Not a production thing, just an experiment trying to explore what might be possible...
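The core of what watchdog abstracts away is mtime polling; a minimal stand-alone sketch (names are illustrative) of detecting a database file change might look like:

```python
import os

class FileWatcher:
    """Remember a file's mtime and report when it changes."""

    def __init__(self, path):
        self.path = path
        self.last_mtime = os.stat(path).st_mtime

    def changed(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self.last_mtime:
            self.last_mtime = mtime
            return True
        return False
```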

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette Library 421546944  
720354227 https://github.com/simonw/datasette/issues/838#issuecomment-720354227 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDcyMDM1NDIyNw== psychemedia 82988 2020-11-02T09:33:58Z 2020-11-02T09:33:58Z CONTRIBUTOR

Thanks; just a note that the datasette.urls.static(path) and datasette.urls.static_plugins(plugin_name, path) items both seem to be repeated and appear in the docs twice?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097  
718528252 https://github.com/simonw/datasette/pull/1049#issuecomment-718528252 https://api.github.com/repos/simonw/datasette/issues/1049 MDEyOklzc3VlQ29tbWVudDcxODUyODI1Mg== psychemedia 82988 2020-10-29T09:20:34Z 2020-10-29T09:20:34Z CONTRIBUTOR

That workaround is probably fine. I was trying to work out whether there might be other situations where a pre-external package load might be useful but couldn't offhand bring any other examples to mind. The static plugins option also looks interesting.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Add template block prior to extra URL loaders 729017519  
716123598 https://github.com/simonw/datasette/issues/838#issuecomment-716123598 https://api.github.com/repos/simonw/datasette/issues/838 MDEyOklzc3VlQ29tbWVudDcxNjEyMzU5OA== psychemedia 82988 2020-10-25T10:20:12Z 2020-10-25T10:53:24Z CONTRIBUTOR

I'm trying to run something behind a MyBinder proxy, but seem to have something set up incorrectly and not sure what the fix is?

I'm starting datasette with jupyter-server-proxy setup:

```
# __init__.py
def setup_nbsearch():
    return {
        "command": [
            "datasette",
            "serve",
            f"{_NBSEARCH_DB_PATH}",
            "-p",
            "{port}",
            "--config",
            "base_url:{base_url}nbsearch/"
        ],
        "absolute_url": True,
        # The following needs the labextension installing,
        # eg in postBuild: jupyter labextension install jupyterlab-server-proxy
        "launcher_entry": {
            "enabled": True,
            "title": "nbsearch",
        },
    }
```

where the base_url gets automatically populated by the server-proxy. I define the loaders as:

```
# __init__.py
from datasette import hookimpl

@hookimpl
def extra_css_urls(database, table, columns, view_name, datasette):
    return [
        "/-/static-plugins/nbsearch/prism.css",
        "/-/static-plugins/nbsearch/nbsearch.css",
    ]
```

but these seem to also need a base_url prefix set somehow?

Currently, the generated HTML loads properly but internal links are incorrect; eg they take the form <link rel="stylesheet" href="/-/static-plugins/nbsearch/prism.css"> which resolves to eg https://notebooks.gesis.org/hub/-/static-plugins/nbsearch/prism.css rather than required URL of form https://notebooks.gesis.org/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static-plugins/nbsearch/prism.css.

The main css is loaded correctly: <link rel="stylesheet" href="/binder/jupyter/user/ouseful-testing-nbsearch-0fx1mx67/nbsearch/-/static/app.css?404439">
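The fix the comment is asking for amounts to joining the proxy-supplied base_url onto each app-relative path; a minimal sketch of that joining logic (a hypothetical helper — later Datasette versions expose `datasette.urls.*` methods for this):

```python
def prefix_base_url(base_url, path):
    """Join a proxy-supplied base_url with an app-relative path so that
    generated links resolve under the proxied prefix rather than the
    server root."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")
```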

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Incorrect URLs when served behind a proxy with base_url set 637395097  
716066000 https://github.com/simonw/datasette/issues/1033#issuecomment-716066000 https://api.github.com/repos/simonw/datasette/issues/1033 MDEyOklzc3VlQ29tbWVudDcxNjA2NjAwMA== psychemedia 82988 2020-10-24T22:58:33Z 2020-10-24T22:58:33Z CONTRIBUTOR

From the docs, I note:

datasette.urls.instance() Returns the URL to the Datasette instance root page. This is usually "/"

What about the proxy case? Eg if I am using jupyter-server-proxy on a MyBinder or local Jupyter notebook server site, https://example.com:PORT/weirdpath/datasette, what does datasette.urls.instance() refer to?

  • [ ] https://example.com:PORT/weirdpath/datasette
  • [ ] https://example.com:PORT/weirdpath/
  • [ ] https://example.com:PORT/
  • [ ] https://example.com
  • [ ] something else?
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
datasette.urls.static_plugins(...) method 725099777  
714657366 https://github.com/simonw/datasette/issues/1033#issuecomment-714657366 https://api.github.com/repos/simonw/datasette/issues/1033 MDEyOklzc3VlQ29tbWVudDcxNDY1NzM2Ng== psychemedia 82988 2020-10-22T17:51:29Z 2020-10-22T17:51:29Z CONTRIBUTOR

How does /-/static relate to the current guidance docs around static, regarding the --static option and metadata formulations such as "extra_js_urls": ["/static/app.js"]? (I've not managed to get this to work in a Jupyter-server-proxied set up; the datasette / jupyter-server-proxy repo may provide a useful test example, eg via MyBinder, for folk to crib from?)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
datasette.urls.static_plugins(...) method 725099777  
604328163 https://github.com/simonw/datasette/issues/573#issuecomment-604328163 https://api.github.com/repos/simonw/datasette/issues/573 MDEyOklzc3VlQ29tbWVudDYwNDMyODE2Mw== psychemedia 82988 2020-03-26T09:41:30Z 2020-03-26T09:41:30Z CONTRIBUTOR

Fixed by @simonw; example here: https://github.com/simonw/jupyterserverproxy-datasette-demo

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exposing Datasette via Jupyter-server-proxy 492153532  
586599424 https://github.com/simonw/datasette/issues/417#issuecomment-586599424 https://api.github.com/repos/simonw/datasette/issues/417 MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA== psychemedia 82988 2020-02-15T15:12:19Z 2020-02-15T15:12:33Z CONTRIBUTOR

So could the polling support also allow you to call sqlite_utils to update a database with csv files? (Though I'm guessing you would only want to handle changed files? Do your scrapers check and cache csv datestamps/hashes?)
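The changed-file check the comment speculates about can be sketched with content hashes (an illustrative stand-alone helper, not part of datasette or sqlite-utils):

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's contents; cheap to store and compare."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def has_changed(path, seen):
    """Return True (and record the new digest) if the file differs from
    the last time we looked; `seen` is a {path: digest} cache."""
    digest = file_digest(path)
    if seen.get(path) == digest:
        return False
    seen[path] = digest
    return True
```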

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette Library 421546944  
559632608 https://github.com/simonw/datasette/issues/573#issuecomment-559632608 https://api.github.com/repos/simonw/datasette/issues/573 MDEyOklzc3VlQ29tbWVudDU1OTYzMjYwOA== psychemedia 82988 2019-11-29T01:43:38Z 2019-11-29T01:43:38Z CONTRIBUTOR

In passing, it looks like a start was made on a datasette Jupyter server extension in https://github.com/lucasdurand/jupyter-datasette although the build fails in MyBinder.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Exposing Datasette via Jupyter-server-proxy 492153532  
559207224 https://github.com/simonw/datasette/issues/642#issuecomment-559207224 https://api.github.com/repos/simonw/datasette/issues/642 MDEyOklzc3VlQ29tbWVudDU1OTIwNzIyNA== psychemedia 82988 2019-11-27T18:40:57Z 2019-11-27T18:41:07Z CONTRIBUTOR

Would cookie cutter approaches also work for creating various flavours of customised templates?

I need to try to create a couple of sites for myself to get a feel for what sorts of thing are easily doable, and what cribbable cookie cutter items might be. I'm guessing https://simonwillison.net/2019/Nov/25/niche-museums/ is a good place to start from?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Provide a cookiecutter template for creating new plugins 529429214  
509013413 https://github.com/simonw/datasette/issues/507#issuecomment-509013413 https://api.github.com/repos/simonw/datasette/issues/507 MDEyOklzc3VlQ29tbWVudDUwOTAxMzQxMw== psychemedia 82988 2019-07-07T16:31:57Z 2019-07-07T16:31:57Z CONTRIBUTOR

Chrome and Firefox both support headless screengrabs from command line, but I don't know how parameterised they can be?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Every datasette plugin on the ecosystem page should have a screenshot 455852801  
483202658 https://github.com/simonw/datasette/issues/429#issuecomment-483202658 https://api.github.com/repos/simonw/datasette/issues/429 MDEyOklzc3VlQ29tbWVudDQ4MzIwMjY1OA== psychemedia 82988 2019-04-15T10:48:01Z 2019-04-15T10:48:01Z CONTRIBUTOR

Minor UI observation:

_where= renders a [remove] link whereas _facet= gets a cross to remove it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
?_where=sql-fragment parameter for table views 432636432  
483017176 https://github.com/simonw/datasette/issues/431#issuecomment-483017176 https://api.github.com/repos/simonw/datasette/issues/431 MDEyOklzc3VlQ29tbWVudDQ4MzAxNzE3Ng== psychemedia 82988 2019-04-14T16:58:37Z 2019-04-14T16:58:37Z CONTRIBUTOR

Hmm... nope... I see an updated timestamp from ls -al on the db but no reload?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette doesn't reload when database file changes 432870248  
474282321 https://github.com/simonw/datasette/issues/412#issuecomment-474282321 https://api.github.com/repos/simonw/datasette/issues/412 MDEyOklzc3VlQ29tbWVudDQ3NDI4MjMyMQ== psychemedia 82988 2019-03-19T10:09:46Z 2019-03-19T10:09:46Z CONTRIBUTOR

Does this also relate to https://github.com/simonw/datasette/issues/283 and the ability to ATTACH DATABASE?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Linked Data(sette) 411257981  
474280581 https://github.com/simonw/datasette/issues/417#issuecomment-474280581 https://api.github.com/repos/simonw/datasette/issues/417 MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ== psychemedia 82988 2019-03-19T10:06:42Z 2019-03-19T10:06:42Z CONTRIBUTOR

This would be really interesting but several possibilities in use arise, I think?

For example:

  • I put a new CSV file into the import dir and a new table is created therefrom
  • I put a CSV file into the import dir that replaces a previous file / table of the same name as a pre-existing table (eg files that contain monthly data in year to date). The data may also patch previous months, so a full replace / DROP on the original table may well be in order.
  • I put a CSV file into the import dir that updates a table of the same name as a pre-existing table (eg files that contain last month's data)

CSV files may also have messy names compared to the table you want. Or for an update CSV, may have the form MYTABLENAME-February2019.csv etc
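Mapping a messy update-CSV filename like MYTABLENAME-February2019.csv back to its target table could be sketched like this (the "-MonthYYYY" suffix convention is an assumption taken from the example above):

```python
import re
from pathlib import Path

def table_name_for(csv_path):
    """Derive a target table name from a CSV filename, stripping a
    trailing "-MonthYYYY" style suffix if present."""
    stem = Path(csv_path).stem
    return re.sub(r"-[A-Z][a-z]+\d{4}$", "", stem)
```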

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Datasette Library 421546944  
459915995 https://github.com/simonw/datasette/issues/160#issuecomment-459915995 https://api.github.com/repos/simonw/datasette/issues/160 MDEyOklzc3VlQ29tbWVudDQ1OTkxNTk5NQ== psychemedia 82988 2019-02-02T00:43:16Z 2019-02-02T00:58:20Z CONTRIBUTOR

Do you have any simple working examples of how to use --static? Inspection of default served files suggests locations such as http://example.com/-/static/app.css?0e06ee.

If datasette is being proxied to http://example.com/foo/datasette, what form should arguments to --static take so that static files are correctly referenced?

Use case is here: https://github.com/psychemedia/jupyterserverproxy-datasette-demo Trying to do a really simple datasette demo in MyBinder using jupyter-server-proxy.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Ability to bundle and serve additional static files 278208011  
436042445 https://github.com/simonw/datasette/issues/370#issuecomment-436042445 https://api.github.com/repos/simonw/datasette/issues/370 MDEyOklzc3VlQ29tbWVudDQzNjA0MjQ0NQ== psychemedia 82988 2018-11-05T21:30:42Z 2018-11-05T21:31:48Z CONTRIBUTOR

Another route would be something like creating a datasette IPython magic for notebooks to take a dataframe and easily render it as a datasette. You'd need to run the app in the background rather than block execution in the notebook. Related to that, or to publishing a dataframe in notebook cell for use in other cells in a non-blocking way, there may be cribs in something like https://github.com/micahscopes/nbmultitask .

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Integration with JupyterLab 377155320  
436037692 https://github.com/simonw/datasette/issues/370#issuecomment-436037692 https://api.github.com/repos/simonw/datasette/issues/370 MDEyOklzc3VlQ29tbWVudDQzNjAzNzY5Mg== psychemedia 82988 2018-11-05T21:15:47Z 2018-11-05T21:18:37Z CONTRIBUTOR

In terms of integration with pandas, I was pondering two different ways datasette/csvs_to_sqlite integration may work:

  • like pandasql, to provide a SQL query layer either by a direct connection to the sqlite db or via datasette API;
  • as an improvement of pandas.to_sql(), which is a bit ropey (e.g. pandas.to_sql_from_csvs(), routing the dataframe to sqlite via csvs_tosqlite rather than the dodgy mapping that pandas supports).

The pandas.publish_* idea could be quite interesting though... Would it be useful/fruitful to think about publish_ as a complement to pandas.to_?
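A naive stdlib-only sketch of the to_sql_from_csvs idea (all columns typed TEXT; real csvs-to-sqlite does type inference and much more):

```python
import csv
import sqlite3

def csv_to_sqlite(csv_path, conn, table):
    """Create a table from the CSV header and bulk-insert the rows."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        headers = next(reader)
        cols = ", ".join(f"[{h}] TEXT" for h in headers)
        conn.execute(f"CREATE TABLE IF NOT EXISTS [{table}] ({cols})")
        placeholders = ", ".join("?" for _ in headers)
        conn.executemany(f"INSERT INTO [{table}] VALUES ({placeholders})", reader)
    conn.commit()
```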

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Integration with JupyterLab 377155320  
435862009 https://github.com/simonw/datasette/issues/371#issuecomment-435862009 https://api.github.com/repos/simonw/datasette/issues/371 MDEyOklzc3VlQ29tbWVudDQzNTg2MjAwOQ== psychemedia 82988 2018-11-05T12:48:35Z 2018-11-05T12:48:35Z CONTRIBUTOR

I think you need to register a domain name you own separately in order to get a non-IP address address? https://www.digitalocean.com/docs/networking/dns/

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
datasette publish digitalocean plugin 377156339  
401310732 https://github.com/simonw/datasette/issues/276#issuecomment-401310732 https://api.github.com/repos/simonw/datasette/issues/276 MDEyOklzc3VlQ29tbWVudDQwMTMxMDczMg== psychemedia 82988 2018-06-29T10:05:04Z 2018-06-29T10:07:25Z CONTRIBUTOR

@russs Different map projections can presumably be handled on the client side using a leaflet plugin to transform the geometry (eg kartena/Proj4Leaflet) although the leaflet side would need to detect or be informed of the original projection?

Another possibility would be to provide an easy way/guidance for users to create an FK'd table containing the WGS84 projection of a non-WGS84 geometry in the original/principle table? This could then act as a proxy for serving GeoJSON to the leaflet map?
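For the common Web Mercator case, the server-side reprojection such an FK'd WGS84 table would hold reduces to a closed-form conversion (a sketch; real code would use a library like pyproj, and other projections need proper CRS handling):

```python
import math

R = 6378137.0  # Web Mercator sphere radius in metres

def mercator_to_wgs84(x, y):
    """Convert EPSG:3857 (Web Mercator) metres to EPSG:4326 lon/lat degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(2 * math.atan(math.exp(y / R)) - math.pi / 2)
    return lon, lat
```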

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Handle spatialite geometry columns better 324835838  
360535979 https://github.com/simonw/datasette/issues/179#issuecomment-360535979 https://api.github.com/repos/simonw/datasette/issues/179 MDEyOklzc3VlQ29tbWVudDM2MDUzNTk3OQ== psychemedia 82988 2018-01-25T17:18:24Z 2018-01-25T17:18:24Z CONTRIBUTOR

To summarise that thread:

  • expose full metadata.json object to the index page template, eg to allow tables to be referred to by name;
  • ability to import multiple metadata.json files, eg to allow metadata files created for a specific SQLite db to be reused in a datasette referring to several database files;

It could also be useful to allow users to import a python file containing custom functions that can then be loaded into scope and made available to custom templates.
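Loading such a user-supplied functions file could be sketched with importlib (a hypothetical helper, not an existing Datasette feature; the public callables would then be merged into the template context):

```python
import importlib.util

def load_template_functions(path):
    """Load a Python file and return its public callables by name."""
    spec = importlib.util.spec_from_file_location("template_functions", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return {
        name: obj
        for name, obj in vars(module).items()
        if callable(obj) and not name.startswith("_")
    }
```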

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
More metadata options for template authors  288438570  


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · About: github-to-sqlite