issue_comments

12 rows where issue = 1656432059 sorted by updated_at descending


simonw (OWNER) · 2023-07-26T18:26:44Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1652296467

I'm abandoning this branch in favour of a fresh attempt, described here:

- https://github.com/simonw/datasette/issues/2109

I'll copy bits and pieces of this branch across as-needed.

simonw (OWNER) · 2023-07-26T14:19:00Z (updated 2023-07-26T15:25:15Z) · https://github.com/simonw/datasette/pull/2053#issuecomment-1651904060

Worth noting that the `register_output_renderer()` hook is actually pretty easy to extend, because it returns a dictionary which could have more keys (like the required set of extras) added to it:

```python
@hookimpl
def register_output_renderer(datasette):
    return {
        "extension": "test",
        "render": render_demo,
        "can_render": can_render_demo,  # Optional
    }
```

https://docs.datasette.io/en/0.64.3/plugin_hooks.html#register-output-renderer-datasette
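
For illustration, here's a hedged sketch of what a renderer using that dictionary might look like with a hypothetical `extras` key declaring what it needs — `extras` is not an existing Datasette API, and the real hook also needs the `@hookimpl` decorator (omitted here to keep this self-contained):

```python
def render_demo(rows, columns):
    # Minimal renderer: emit rows as tab-separated text
    lines = ["\t".join(columns)]
    for row in rows:
        lines.append("\t".join(str(row[c]) for c in columns))
    return {"body": "\n".join(lines), "content_type": "text/plain"}


def register_output_renderer(datasette):
    return {
        "extension": "demo",
        "render": render_demo,
        # Hypothetical new key: the extras this renderer requires
        "extras": {"rows", "columns"},
    }


renderer = register_output_renderer(None)
result = renderer["render"]([{"id": 1}], ["id"])
```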

One slight hiccup with that plugin hook is this:

> `rows` - list of `sqlite3.Row` objects

I could turn that into a Datasette defined object that behaves like a sqlite3.Row though, which would give me extra flexibility in the future.

A bit tricky though since it's implemented in C for performance: https://github.com/python/cpython/blob/b0202a4e5d6b629ba5acbc703e950f08ebaf07df/Modules/_sqlite/row.c

Pasted that into Claude for the following explanation:

  • pysqlite_Row is the structure defining the Row object. It contains the tuple of data (self->data) and description of columns (self->description).
  • pysqlite_row_new() is the constructor which creates a new Row object given a cursor and tuple of data.
  • pysqlite_row_dealloc() frees the memory when Row object is deleted.
  • pysqlite_row_keys() returns the column names of the row.
  • pysqlite_row_length() and pysqlite_row_subscript() implement sequence like behavior to access row elements by index.
  • pysqlite_row_subscript() also allows accessing by column name by doing a lookup in description.
  • pysqlite_row_hash() and pysqlite_row_richcompare() implement equality checks and hash function.
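
Those behaviours are easy to verify directly from Python — this uses the real stdlib `sqlite3` module, nothing hypothetical:

```python
import sqlite3

# Demonstrating the sqlite3.Row behaviours described above:
# sequence-style access, column-name access, keys(), equality and hashing.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'datasette')")
row = conn.execute("SELECT id, name FROM t").fetchone()

assert row.keys() == ["id", "name"]  # pysqlite_row_keys()
assert len(row) == 2                 # pysqlite_row_length()
assert row[0] == 1                   # index access (pysqlite_row_subscript)
assert row["name"] == "datasette"    # name lookup via description
row2 = conn.execute("SELECT id, name FROM t").fetchone()
assert row == row2 and hash(row) == hash(row2)  # richcompare + hash
```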

I could use protocols in Python to make my own DatasetteRow which can be used interchangeably with sqlite3.Row - https://docs.python.org/3/library/typing.html#typing.Protocol

Turned this into a TIL: https://til.simonwillison.net/python/protocols
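
A minimal sketch of that Protocol idea — `RowProtocol` and `DatasetteRow` are hypothetical names for illustration, not Datasette code:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class RowProtocol(Protocol):
    """Structural interface matching how sqlite3.Row is used."""

    def keys(self) -> list: ...
    def __getitem__(self, key) -> Any: ...
    def __len__(self) -> int: ...


class DatasetteRow:
    """Pure-Python stand-in that structurally matches sqlite3.Row usage."""

    def __init__(self, columns, values):
        self._columns = list(columns)
        self._values = list(values)

    def keys(self):
        return list(self._columns)

    def __getitem__(self, key):
        # Support both row[0] and row["name"], like sqlite3.Row
        if isinstance(key, str):
            return self._values[self._columns.index(key)]
        return self._values[key]

    def __len__(self):
        return len(self._values)


row = DatasetteRow(["id", "name"], [1, "datasette"])
assert isinstance(row, RowProtocol)  # structural check, no inheritance needed
```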

simonw (OWNER) · 2023-07-26T14:14:34Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1651894668

Another point of confusion is how /content sometimes serves the database index page (with a list of tables) and sometimes serves the results of a query.

I could resolve this by turning the information on the index page into extras, which can optionally be requested any time a query is run but default to being shown if there is no query.
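
A rough sketch of that dispatch, with entirely hypothetical helper names (`database_page`, the extras registry) standing in for whatever the real implementation would use:

```python
import asyncio


async def extra_tables():
    # Hypothetical extra: the table list shown on the index page
    return ["issue_comments", "issues"]


async def extra_rows():
    return [{"id": 1}]


async def database_page(sql, requested_extras, registry):
    # With no ?sql=, default to the index-page extras; with a query,
    # serve only what was asked for plus the query rows.
    extras = set(requested_extras) or ({"tables"} if not sql else set())
    data = {name: await registry[name]() for name in extras}
    if sql:
        data["rows"] = await registry["rows"]()
    return data


registry = {"tables": extra_tables, "rows": extra_rows}
index = asyncio.run(database_page(None, set(), registry))
query = asyncio.run(database_page("select 1", set(), registry))
```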

simonw (OWNER) · 2023-07-26T14:08:20Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1651883505

I think the hardest part of getting this working is dealing with the different formats.

Idea: refactor .html as a format (since it's by far the most complex) and tweak the plugin hook a bit as part of that, then use what I learn from that to get the other formats working.

simonw (OWNER) · 2023-07-26T14:03:37Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1651874649

Big chunk of commented-out code I just removed:

```python

import pdb

pdb.set_trace()

if isinstance(output, dict) and output.get("ok") is False:
    # TODO: Other error codes?

    response.status_code = 400

if datasette.cors:
    add_cors_headers(response.headers)

return response

# registry = Registry(
#     extra_count,
#     extra_facet_results,
#     extra_facets_timed_out,
#     extra_suggested_facets,
#     facet_instances,
#     extra_human_description_en,
#     extra_next_url,
#     extra_columns,
#     extra_primary_keys,
#     run_display_columns_and_rows,
#     extra_display_columns,
#     extra_display_rows,
#     extra_debug,
#     extra_request,
#     extra_query,
#     extra_metadata,
#     extra_extras,
#     extra_database,
#     extra_table,
#     extra_database_color,
#     extra_table_actions,
#     extra_filters,
#     extra_renderers,
#     extra_custom_table_templates,
#     extra_sorted_facet_results,
#     extra_table_definition,
#     extra_view_definition,
#     extra_is_view,
#     extra_private,
#     extra_expandable_columns,
#     extra_form_hidden_args,
# )

results = await registry.resolve_multi(
    ["extra_{}".format(extra) for extra in extras]
)
data = {
    "ok": True,
    "next": next_value and str(next_value) or None,
}
data.update(
    {
        key.replace("extra_", ""): value
        for key, value in results.items()
        if key.startswith("extra_") and key.replace("extra_", "") in extras
    }
)
raw_sqlite_rows = rows[:page_size]
data["rows"] = [dict(r) for r in raw_sqlite_rows]

private = False
if canned_query:
    # Respect canned query permissions
    visible, private = await datasette.check_visibility(
        request.actor,
        permissions=[
            ("view-query", (database, canned_query)),
            ("view-database", database),
            "view-instance",
        ],
    )
    if not visible:
        raise Forbidden("You do not have permission to view this query")

else:
    await datasette.ensure_permissions(request.actor, [("execute-sql", database)])

# If there's no sql, show the database index page
if not sql:
    return await database_index_view(request, datasette, db)

validate_sql_select(sql)

# Extract any :named parameters
named_parameters = named_parameters or await derive_named_parameters(db, sql)
named_parameter_values = {
    named_parameter: params.get(named_parameter) or ""
    for named_parameter in named_parameters
    if not named_parameter.startswith("_")
}

# Set to blank string if missing from params
for named_parameter in named_parameters:
    if named_parameter not in params and not named_parameter.startswith("_"):
        params[named_parameter] = ""

extra_args = {}
if params.get("_timelimit"):
    extra_args["custom_time_limit"] = int(params["_timelimit"])
if _size:
    extra_args["page_size"] = _size

templates = [f"query-{to_css_class(database)}.html", "query.html"]
if canned_query:
    templates.insert(
        0,
        f"query-{to_css_class(database)}-{to_css_class(canned_query)}.html",
    )

query_error = None

# Execute query - as write or as read
if write:
    raise NotImplementedError("Write queries not yet implemented")
    # if request.method == "POST":
    #     # If database is immutable, return an error
    #     if not db.is_mutable:
    #         raise Forbidden("Database is immutable")
    #     body = await request.post_body()
    #     body = body.decode("utf-8").strip()
    #     if body.startswith("{") and body.endswith("}"):
    #         params = json.loads(body)
    #         # But we want key=value strings
    #         for key, value in params.items():
    #             params[key] = str(value)
    #     else:
    #         params = dict(parse_qsl(body, keep_blank_values=True))
    #     # Should we return JSON?
    #     should_return_json = (
    #         request.headers.get("accept") == "application/json"
    #         or request.args.get("_json")
    #         or params.get("_json")
    #     )
    #     if canned_query:
    #         params_for_query = MagicParameters(params, request, self.ds)
    #     else:
    #         params_for_query = params
    #     ok = None
    #     try:
    #         cursor = await self.ds.databases[database].execute_write(
    #             sql, params_for_query
    #         )
    #         message = metadata.get(
    #             "on_success_message"
    #         ) or "Query executed, {} row{} affected".format(
    #             cursor.rowcount, "" if cursor.rowcount == 1 else "s"
    #         )
    #         message_type = self.ds.INFO
    #         redirect_url = metadata.get("on_success_redirect")
    #         ok = True
    #     except Exception as e:
    #         message = metadata.get("on_error_message") or str(e)
    #         message_type = self.ds.ERROR
    #         redirect_url = metadata.get("on_error_redirect")
    #         ok = False
    #     if should_return_json:
    #         return Response.json(
    #             {
    #                 "ok": ok,
    #                 "message": message,
    #                 "redirect": redirect_url,
    #             }
    #         )
    #     else:
    #         self.ds.add_message(request, message, message_type)
    #         return self.redirect(request, redirect_url or request.path)
    # else:

    #     async def extra_template():
    #         return {
    #             "request": request,
    #             "db_is_immutable": not db.is_mutable,
    #             "path_with_added_args": path_with_added_args,
    #             "path_with_removed_args": path_with_removed_args,
    #             "named_parameter_values": named_parameter_values,
    #             "canned_query": canned_query,
    #             "success_message": request.args.get("_success") or "",
    #             "canned_write": True,
    #         }

    #     return (
    #         {
    #             "database": database,
    #             "rows": [],
    #             "truncated": False,
    #             "columns": [],
    #             "query": {"sql": sql, "params": params},
    #             "private": private,
    #         },
    #         extra_template,
    #         templates,
    #     )

# Not a write
rows = []
if canned_query:
    params_for_query = MagicParameters(params, request, datasette)
else:
    params_for_query = params
try:
    results = await datasette.execute(
        database, sql, params_for_query, truncate=True, **extra_args
    )
    columns = [r[0] for r in results.description]
    rows = list(results.rows)
except sqlite3.DatabaseError as e:
    query_error = e
    results = None
    columns = []

allow_execute_sql = await datasette.permission_allowed(
    request.actor, "execute-sql", database
)

format_ = request.url_vars.get("format") or "html"

if format_ == "csv":
    raise NotImplementedError("CSV format not yet implemented")
elif format_ in datasette.renderers.keys():
    # Dispatch request to the correct output format renderer
    # (CSV is not handled here due to streaming)
    result = call_with_supported_arguments(
        datasette.renderers[format_][0],
        datasette=datasette,
        columns=columns,
        rows=rows,
        sql=sql,
        query_name=None,
        database=db.name,
        table=None,
        request=request,
        view_name="table",  # TODO: should this be "query"?
        # These will be deprecated in Datasette 1.0:
        args=request.args,
        data={
            "rows": rows,
        },  # TODO what should this be?
    )
    result = await await_me_maybe(result)
    if result is None:
        raise NotFound("No data")
    if isinstance(result, dict):
        r = Response(
            body=result.get("body"),
            status=result.get("status_code") or 200,
            content_type=result.get("content_type", "text/plain"),
            headers=result.get("headers"),
        )
    elif isinstance(result, Response):
        r = result
        # if status_code is not None:
        #     # Over-ride the status code
        #     r.status = status_code
    else:
        assert False, f"{result} should be dict or Response"
elif format_ == "html":
    headers = {}
    templates = [f"query-{to_css_class(database)}.html", "query.html"]
    template = datasette.jinja_env.select_template(templates)
    alternate_url_json = datasette.absolute_url(
        request,
        datasette.urls.path(path_with_format(request=request, format="json")),
    )
    headers.update(
        {
            "Link": '{}; rel="alternate"; type="application/json+datasette"'.format(
                alternate_url_json
            )
        }
    )
    r = Response.html(
        await datasette.render_template(
            template,
            dict(
                data,
                append_querystring=append_querystring,
                path_with_replaced_args=path_with_replaced_args,
                fix_path=datasette.urls.path,
                settings=datasette.settings_dict(),
                # TODO: review up all of these hacks:
                alternate_url_json=alternate_url_json,
                datasette_allow_facet=(
                    "true" if datasette.setting("allow_facet") else "false"
                ),
                is_sortable=any(c["sortable"] for c in data["display_columns"]),
                allow_execute_sql=await datasette.permission_allowed(
                    request.actor, "execute-sql", resolved.db.name
                ),
                query_ms=1.2,
                select_templates=[
                    f"{'*' if template_name == template.name else ''}{template_name}"
                    for template_name in templates
                ],
            ),
            request=request,
            view_name="table",
        ),
        headers=headers,
    )
else:
    assert False, "Invalid format: {}".format(format_)
# if next_url:
#     r.headers["link"] = f'<{next_url}>; rel="next"'
return r

```

simonw (OWNER) · 2023-05-26T23:13:02Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1565058994

I should have an extra called extra_html_context which bundles together all of the weird extra stuff needed by the HTML template, and is then passed as the root context when the template is rendered (with the other stuff from extras patched into it).
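
A hedged sketch of that bundling step — `build_template_context` and the key names here are illustrative only:

```python
# Sketch: take the resolved extras, use extra_html_context as the root
# template context, then patch the remaining extras on top of it.
def build_template_context(extras_results):
    html_context = extras_results.pop("extra_html_context", {})
    context = dict(html_context)  # root context for the template
    for key, value in extras_results.items():
        context[key.replace("extra_", "")] = value
    return context


results = {
    "extra_html_context": {"settings": {"allow_facet": True}, "query_ms": 1.2},
    "extra_count": 12,
}
context = build_template_context(results)
```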

simonw (OWNER) · 2023-05-26T04:27:55Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1563793781

I should split out a canned_query.html template too, as something that extends the query.html template.

simonw (OWNER) · 2023-05-26T00:40:22Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1563667574

Or maybe...

  • BaseQueryView(View) - knows how to render the results of a SQL query
  • QueryView(BaseQueryView) - renders from ?sql=
  • CannedQueryView(BaseQueryView) - renders for a named canned query

And then later perhaps:

  • RowQueryView(BaseQueryView) - renders `select * from t where pk = ?`
  • TableQueryView(BaseQueryView) - replaces the super complex existing TableView
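
Sketching that hierarchy (all class and method names are hypothetical, rendering elided):

```python
import asyncio


class View:
    async def get(self, request):
        raise NotImplementedError


class BaseQueryView(View):
    """Knows how to render the results of a SQL query."""

    async def sql_for_request(self, request):
        raise NotImplementedError

    async def get(self, request):
        sql = await self.sql_for_request(request)
        return {"sql": sql, "rows": []}  # execution/rendering elided


class QueryView(BaseQueryView):
    """Renders from ?sql=."""

    async def sql_for_request(self, request):
        return request["args"]["sql"]


class CannedQueryView(BaseQueryView):
    """Renders a named canned query."""

    def __init__(self, sql):
        self.sql = sql

    async def sql_for_request(self, request):
        return self.sql


result = asyncio.run(QueryView().get({"args": {"sql": "select 1"}}))
canned = asyncio.run(CannedQueryView("select 2").get({}))
```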
simonw (OWNER) · 2023-05-26T00:32:47Z (updated 2023-05-26T00:35:47Z) · https://github.com/simonw/datasette/pull/2053#issuecomment-1563663925

I'm going to entirely split canned queries off from ?sql= queries - they share a bunch of code right now which is just making everything much harder to follow.

I'll refactor their shared bits into functions that they both call.

Or maybe I'll try having CannedQueryView as a subclass of QueryView.

simonw (OWNER) · 2023-05-26T00:32:08Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1563663616

Now that I have the new View subclass from #2078 I want to use it to simplify this code.

Challenge: there are several things to consider here:

  • The /db page without ?sql= displays a list of tables in that database
  • With ?sql= it shows the query results for that query (or an error)
  • If it's a /db/name-of-canned-query it works a bit like the query page, but executes a canned query instead of the ?sql= query
  • POST /db/name-of-canned-query adds support for writable canned queries
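
Those four cases could be sketched as a single dispatch function (purely illustrative, not Datasette's actual routing):

```python
# Map the four /db behaviours listed above to a handler name.
def route(path_has_canned_query, method, has_sql):
    if path_has_canned_query:
        # Canned query: GET renders it, POST executes a writable one
        return "write-canned-query" if method == "POST" else "canned-query"
    if has_sql:
        return "query-results"  # ?sql= shows query results (or an error)
    return "table-list"         # bare /db shows the list of tables
```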
simonw (OWNER) · 2023-05-25T17:48:50Z (updated 2023-05-25T17:49:52Z) · https://github.com/simonw/datasette/pull/2053#issuecomment-1563285150

Uncommitted experimental code:

```diff
diff --git a/datasette/views/database.py b/datasette/views/database.py
index 455ebd1f..85775433 100644
--- a/datasette/views/database.py
+++ b/datasette/views/database.py
@@ -909,12 +909,13 @@ async def query_view(
     elif format_ in datasette.renderers.keys():
         # Dispatch request to the correct output format renderer
         # (CSV is not handled here due to streaming)
+        print(data)
         result = call_with_supported_arguments(
             datasette.renderers[format_][0],
             datasette=datasette,
-            columns=columns,
-            rows=rows,
-            sql=sql,
+            columns=data["rows"][0].keys(),
+            rows=data["rows"],
+            sql='',
             query_name=None,
             database=db.name,
             table=None,
@@ -923,7 +924,7 @@ async def query_view(
             # These will be deprecated in Datasette 1.0:
             args=request.args,
             data={
-                "rows": rows,
+                "rows": data["rows"],
             },  # TODO what should this be?
         )
         result = await await_me_maybe(result)
diff --git a/docs/index.rst b/docs/index.rst
index 5a9cc7ed..254ed3da 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -57,6 +57,7 @@ Contents
    settings
    introspection
    custom_templates
+   template_context
    plugins
    writing_plugins
    plugin_hooks
```

Where `docs/template_context.rst` looked like this:

```rst
.. _template_context:

Template context
================

.. currentmodule:: datasette.context

This page describes the variables made available to templates used by Datasette to render different pages of the application.

.. autoclass:: QueryContext
    :members:
```

And `datasette/context.py` had this:

```python
from dataclasses import dataclass


@dataclass
class QueryContext:
    """
    Used by the /database page when showing the results of a SQL query
    """

    id: int
    "Id is a thing"
    rows: list[dict]
    "Name is another thing"
```

simonw (OWNER) · 2023-04-05T23:28:53Z · https://github.com/simonw/datasette/pull/2053#issuecomment-1498279469

Table errors page currently does this:

```json
{
    "ok": false,
    "error": "no such column: blah",
    "status": 400,
    "title": "Invalid SQL"
}
```



```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
```
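
For reference, the schema above can be recreated in-memory to show the query behind this page — "rows where issue = 1656432059 sorted by updated_at descending" (the inserted row here is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Recreate the issue_comments schema shown above (FKs are not enforced
# unless PRAGMA foreign_keys is turned on, so the referenced tables can
# be omitted for this sketch)
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
""")
conn.execute(
    "INSERT INTO issue_comments (id, issue, updated_at) "
    "VALUES (1, 1656432059, '2023-07-26T18:26:44Z')"
)
# The query driving this page, served by the index created above
rows = conn.execute(
    "SELECT * FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (1656432059,),
).fetchall()
```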
Powered by Datasette · Queries took 23.706ms · About: github-to-sqlite