
issues


16 rows where type = "issue" and user = 82988 sorted by updated_at descending


state 2

  • closed 12
  • open 4

repo 2

  • datasette 11
  • sqlite-utils 5

type 1

  • issue 16
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
377155320 MDU6SXNzdWUzNzcxNTUzMjA= 370 Integration with JupyterLab psychemedia 82988 open 0     4 2018-11-04T13:57:13Z 2022-09-29T08:17:47Z   CONTRIBUTOR  

I just watched a demo video for the JupyterLab Chart Editor which wraps the plotly chart editor app in a JupyterLab panel and lets you open a plotly chart JSON file in that editor. Essentially, it pops an HTML app into a panel in JupyterLab, and I think registers the app as a file viewer for a particular file type. (I'm not completely taken by it, tbh, because it means you can do irreproducible things to the chart definition file, but that's another issue).

JupyterLab extensions can also open files from a dialogue as the iframe/html previewer shows: https://github.com/timkpaine/jupyterlab_iframe.

This made me wonder about what datasette integration with JupyterLab might do.

For example, by right-clicking on a CSV file (for which there is already a CSV table view) in the file browser, offer a View / Run as datasette file viewer option that will:

  • run the CSV file through csvs-to-sqlite;
  • launch the datasette server and display the datasette view in a JupyterLab panel.

(? Create a new SQLite db for each CSV file and launch each datasette view on a new port? Or have a JupyterLab (session?) SQLite db that stores all datasette viewed CSVs and runs on a single port?)

As a freebie, the datasette API would allow you to run efficient SQL queries against the file, eg using pandas.read_sql() queries in a notebook in the same space.

Related:

  • JupyterLab extensions docs
  • a cookiecutter for writing JupyterLab extensions using Javascript
  • a cookiecutter for writing JupyterLab extensions using Typescript
  • tutorial: Let’s Make an xkcd JupyterLab Extension
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/370/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1128466114 I_kwDOCGYnMM5DQwbC 406 Creating tables with custom datatypes psychemedia 82988 open 0     5 2022-02-09T12:16:31Z 2022-09-15T18:13:50Z   NONE  

Via https://stackoverflow.com/a/18622264/454773 I note the ability to register custom handlers for novel datatypes that can map into and out of things like sqlite BLOBs.

From a quick look and a quick play, I didn't spot a way to do this in sqlite_utils?

For example:

```python
# Via https://stackoverflow.com/a/18622264/454773

import sqlite3
import numpy as np
import io

def adapt_array(arr):
    """ http://stackoverflow.com/a/31312102/190597 (SoulNibbler) """
    out = io.BytesIO()
    np.save(out, arr)
    out.seek(0)
    return sqlite3.Binary(out.read())

def convert_array(text):
    out = io.BytesIO(text)
    out.seek(0)
    return np.load(out)

# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)

# Converts TEXT to np.array when selecting
sqlite3.register_converter("array", convert_array)
```

```python
from sqlite_utils import Database

db = Database('test.db')

# Reset the database connection to use the parsed datatype;
# sqlite_utils doesn't seem to support eg:
#   Database('test.db', detect_types=sqlite3.PARSE_DECLTYPES)
db.conn = sqlite3.connect(db_name, detect_types=sqlite3.PARSE_DECLTYPES)

# Create a table the old fashioned way,
# but using the new custom data type
vector_table_create = """
CREATE TABLE dummy
    (title TEXT, vector array);
"""

cur = db.conn.cursor()
cur.execute(vector_table_create)

# sqlite_utils doesn't appear to support custom types (yet?!);
# the following errors on the "array" datatype:
"""
db["dummy"].create({
    "title": str,
    "vector": "array",
})
"""
```

We can then add / retrieve records from the database where the datatype of the vector field is a custom registered array type (which is to say, a numpy array):

```python
import numpy as np

db["dummy"].insert({'title': "test1", 'vector': np.array([1, 2, 3])})

for row in db.query("SELECT * FROM dummy"):
    print(row['title'], row['vector'], type(row['vector']))

# Output:
# test1 [1 2 3] <class 'numpy.ndarray'>
```

It would be handy to be able to do this idiomatically in sqlite_utils.
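For illustration, here's a minimal stdlib-only sketch of the adapter/converter plumbing involved, with JSON-serialised Python lists standing in for numpy arrays (the function names and the JSON encoding are assumptions for the sketch, not sqlite_utils behaviour):

```python
import json
import sqlite3

# Hypothetical stand-in for numpy arrays: store Python lists in a
# custom "array" column type, serialised as JSON text.
def adapt_list(lst):
    return json.dumps(lst)

def convert_array(blob):
    return json.loads(blob)

sqlite3.register_adapter(list, adapt_list)
sqlite3.register_converter("array", convert_array)

# detect_types=PARSE_DECLTYPES is the flag sqlite_utils would need to
# pass through for declared-type conversion to fire on SELECT.
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE dummy (title TEXT, vector array)")
conn.execute("INSERT INTO dummy VALUES (?, ?)", ("test1", [1, 2, 3]))

row = conn.execute("SELECT title, vector FROM dummy").fetchone()
print(row)  # ('test1', [1, 2, 3])
```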

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/406/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1063388037 I_kwDOCGYnMM4_YgOF 343 Provide function to generate hash_id from specified columns psychemedia 82988 closed 0     4 2021-11-25T10:12:12Z 2022-03-02T04:25:25Z 2022-03-02T04:25:25Z NONE  

Hi

I note that you define _hash() to create a hash_id from non-id column values in a table here.

It would be useful to be able to call a complementary function to generate a corresponding _id from a subset of specified columns when adding items to another table, eg to support the creation of foreign keys.

Or is there a better pattern for doing that?
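A sketch of what such a helper might look like, loosely modelled on the sha1-of-JSON approach that _hash uses (the function name, signature and column handling here are assumptions, not existing sqlite-utils API):

```python
import hashlib
import json

def hash_id(record, columns=None):
    """Sha1 hex digest over a chosen subset of columns, loosely
    mirroring how sqlite-utils derives a hash_id from column values."""
    if columns is not None:
        record = {key: record[key] for key in columns}
    return hashlib.sha1(
        json.dumps(record, default=repr, sort_keys=True).encode("utf8")
    ).hexdigest()

# The same column values always give the same digest, so it can be
# used as a foreign key into a hash_id-keyed table:
row = {"name": "Lake X", "county": "Y", "depth": 22}
fk = hash_id(row, columns=["name", "county"])
print(fk)
```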

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/343/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
336936010 MDU6SXNzdWUzMzY5MzYwMTA= 331 Datasette throws error when loading spatialite db without extension loaded psychemedia 82988 closed 0     2 2018-06-29T09:51:14Z 2022-01-20T21:29:40Z 2018-07-10T15:13:36Z CONTRIBUTOR  

When starting datasette on a SpatiaLite database without loading the SpatiaLite extension (using eg --load-extension=/usr/local/lib/mod_spatialite.dylib) an error is thrown and the server fails to start:

```
datasette -p 8003 adminboundaries.db
Serve! files=('adminboundaries.db',) on port 8003
Traceback (most recent call last):
  File "/Users/ajh59/anaconda3/bin/datasette", line 11, in <module>
    sys.exit(cli())
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/cli.py", line 552, in serve
    ds.inspect()
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/app.py", line 273, in inspect
    "tables": inspect_tables(conn, self.metadata.get("databases", {}).get(name, {}))
  File "/Users/ajh59/anaconda3/lib/python3.6/site-packages/datasette/inspect.py", line 79, in inspect_tables
    "PRAGMA table_info({});".format(escape_sqlite(table))
sqlite3.OperationalError: no such module: VirtualSpatialIndex
```

It would be nice to trap this and return a message saying something like:

```
It looks like you're trying to load a SpatiaLite database?
Make sure you load in the SpatiaLite extension when starting datasette.

Read more: https://datasette.readthedocs.io/en/latest/spatialite.html
```
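A rough stdlib sketch of the kind of trap that could produce that message (function names hypothetical; datasette's real code paths differ):

```python
import sqlite3

SPATIALITE_HINT = """It looks like you're trying to load a SpatiaLite database?
Make sure you load in the SpatiaLite extension when starting datasette.

Read more: https://datasette.readthedocs.io/en/latest/spatialite.html"""

def is_missing_spatialite(error):
    # SpatiaLite tables reference virtual-table modules such as
    # VirtualSpatialIndex; without the extension loaded, SQLite raises
    # "no such module: ..." as soon as they are touched.
    return isinstance(error, sqlite3.OperationalError) and str(error).startswith(
        "no such module: Virtual"
    )

def inspect_table(conn, table):
    try:
        return conn.execute("PRAGMA table_info([{}]);".format(table)).fetchall()
    except sqlite3.OperationalError as error:
        if is_missing_spatialite(error):
            raise SystemExit(SPATIALITE_HINT)
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT)")
print(inspect_table(conn, "places"))
```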

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/331/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
377156339 MDU6SXNzdWUzNzcxNTYzMzk= 371 datasette publish digitalocean plugin psychemedia 82988 closed 0     3 2018-11-04T14:07:41Z 2021-01-04T20:14:28Z 2021-01-04T20:14:28Z CONTRIBUTOR  

Provide support for launching datasette on Digital Ocean.

Example: Deploy Docker containers into Digital Ocean.

Digital Ocean also has a preconfigured VM running Docker that can be launched from the command line via the Digital Ocean API: Docker One-Click Application.

Related:

  • Launching containers in Digital Ocean servers running docker: How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 16.04
  • How To Use Doctl, the Official DigitalOcean Command-Line Client

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/371/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
492153532 MDU6SXNzdWU0OTIxNTM1MzI= 573 Exposing Datasette via Jupyter-server-proxy psychemedia 82988 closed 0     3 2019-09-11T10:32:36Z 2020-03-26T09:41:30Z 2020-03-26T09:41:30Z CONTRIBUTOR  

It is possible to expose a running datasette service in a Jupyter environment such as a MyBinder environment using the jupyter-server-proxy.

For example, using this demo Binder, which has the server proxy installed, we can upload a simple test database from the notebook homepage, install datasette from a Jupyter terminal, set it running against the test db on eg port 8001, and then view it via the path proxy/8001.

Clicking links results in 404s though because the datasette links aren't relative to the current path?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/573/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
545407916 MDU6SXNzdWU1NDU0MDc5MTY= 73 upsert_all() throws issue when upserting to empty table psychemedia 82988 closed 0     6 2020-01-05T11:58:57Z 2020-01-31T14:21:09Z 2020-01-05T17:20:18Z NONE  

If I try to add a list of dicts to an empty table using upsert_all, I get an error:

```python
import sqlite3
from sqlite_utils import Database
import pandas as pd

conx = sqlite3.connect(':memory')
cx = conx.cursor()
cx.executescript('CREATE TABLE "test" ("Col1" TEXT);')

q = "SELECT * FROM test;"
pd.read_sql(q, conx)  # shows empty table

db = Database(conx)
db['test'].upsert_all([{'Col1': 'a'}, {'Col1': 'b'}])
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input-74-8c26d93d7587> in <module>
      1 db = Database(conx)
----> 2 db['test'].upsert_all([{'Col1':'a'},{'Col1':'b'}])

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, extracts)
   1157             alter=alter,
   1158             extracts=extracts,
-> 1159             upsert=True,
   1160         )
   1161

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, column_order, not_null, defaults, batch_size, hash_id, alter, ignore, replace, extracts, upsert)
   1040                 sql = "INSERT OR IGNORE INTO {table} VALUES({pk_placeholders});".format(
   1041                     table=self.name,
-> 1042                     pks=", ".join(["[{}]".format(p) for p in pks]),
   1043                     pk_placeholders=", ".join(["?" for p in pks]),
   1044                 )

TypeError: 'NoneType' object is not iterable
```

A hacky workaround in use is:

```python
try:
    db['test'].upsert_all([{'Col1': 'a'}, {'Col1': 'b'}])
except:
    db['test'].insert_all([{'Col1': 'a'}, {'Col1': 'b'}])
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/73/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
527710055 MDU6SXNzdWU1Mjc3MTAwNTU= 640 Nicer error message for heroku publish name clash psychemedia 82988 open 0     1 2019-11-24T14:57:07Z 2019-12-06T07:19:34Z   CONTRIBUTOR  

If you try to publish to Heroku without setting a name (i.e. using the default datasette name) and a project already exists under that name, you get a meaningful error report on the first line, followed by Python error messages that drown it out:

```
Creating datasette... !
 ▸    Name datasette is already taken
Traceback (most recent call last):
  File "/usr/local/bin/datasette", line 10, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/Users/NNNNN/Library/Python/3.7/lib/python/site-packages/datasette/publish/heroku.py", line 124, in heroku
    create_output = check_output(cmd).decode("utf8")
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['heroku', 'apps:create', 'datasette', '--json']' returned non-zero exit status 1.
```

It would be neater if:

  • the Py error message was caught;
  • the report suggested setting a project name using -n etc.

It may also be useful to provide a command to list the current names that are being used, which I assume is available via a Heroku call?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/640/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
291639118 MDU6SXNzdWUyOTE2MzkxMTg= 183 Custom Queries - escaping strings psychemedia 82988 closed 0     2 2018-01-25T16:49:13Z 2019-06-24T06:45:07Z 2019-06-24T06:45:07Z CONTRIBUTOR  

If a SQLite table column name contains spaces, they are usually referred to in double quotes:

```sql
SELECT * FROM mytable WHERE "gappy column name"="my value";
```

In the JSON metadata file, this is passed by escaping the double quotes:

```
"queries": {"my query": "SELECT * FROM mytable WHERE \"gappy column name\"=\"my value\";"}
```

When specifying a custom query in metadata.json using double quotes, these are then rendered in the datasette query box using single quotes:

```sql
SELECT * FROM mytable WHERE 'gappy column name'='my value';
```

which does not work.

Alternatively, a valid custom query can be passed using backticks (`) to quote the column name and single (unescaped) quotes for the matched value:

```
"queries": {"my query": "SELECT * FROM mytable WHERE `gappy column name`='my value';"}
```
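One way to sidestep hand-escaping altogether is to generate metadata.json programmatically and let json.dumps handle the quoting (a sketch, not a datasette feature):

```python
import json

# Standard SQL quoting: double quotes for the identifier, single
# quotes for the string literal.
query = "SELECT * FROM mytable WHERE \"gappy column name\"='my value';"

# json.dumps produces a correctly escaped metadata.json entry,
# so no hand-escaping of the inner double quotes is needed:
metadata = {"queries": {"my query": query}}
print(json.dumps(metadata, indent=4))
```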

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/183/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
432870248 MDU6SXNzdWU0MzI4NzAyNDg= 431 Datasette doesn't reload when database file changes psychemedia 82988 closed 0     3 2019-04-13T16:50:43Z 2019-05-02T05:13:55Z 2019-05-02T05:13:54Z CONTRIBUTOR  

My understanding of the --reload option was that if the database file changed datasette would automatically reload.

I'm running on a Mac and from the datasette UI queries don't seem to be picking up data in a newly changed db (I checked the db timestamp - it certainly updated).

I was also expecting to see some sort of log statement in the datasette logging to say that it had detected a file change and restarted, but don't see anything there?

Will try to check on an Ubuntu box when I get a chance to see if this is a Mac thing.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/431/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
403922644 MDU6SXNzdWU0MDM5MjI2NDQ= 8 Problems handling column names containing spaces or - psychemedia 82988 closed 0     3 2019-01-28T17:23:28Z 2019-04-14T15:29:33Z 2019-02-23T21:09:03Z NONE  

Irrespective of whether using column names containing a space or - character is good practice, SQLite does allow it, but sqlite-utils throws an error in the following cases:

```python
import sqlite3
import pandas as pd
from sqlite_utils import Database

dbname = 'test.db'
DB = Database(sqlite3.connect(dbname))

df = pd.DataFrame({'col1': range(3), 'col2': range(3)})

# Convert pandas dataframe to appropriate list/dict format
DB['test1'].insert_all(df.to_dict(orient='records'))
# Works fine
```

However:

```python
df = pd.DataFrame({'col 1': range(3), 'col2': range(3)})
DB['test1'].insert_all(df.to_dict(orient='records'))
```

throws:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-27-070b758f4f92> in <module>()
      1 import pandas as pd
      2 df = pd.DataFrame({'col 1':range(3), 'col2':range(3)})
----> 3 DB['test1'].insert_all(df.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327                 jsonify_if_needed(record.get(key, None)) for key in all_columns
    328             )
--> 329             result = self.db.conn.execute(sql, values)
    330             self.db.conn.commit()
    331             self.last_id = result.lastrowid

OperationalError: near "1": syntax error
```

and:

```python
df = pd.DataFrame({'col-1': range(3), 'col2': range(3)})
DB['test1'].upsert_all(df.to_dict(orient='records'))
```

results in:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-28-654523549d20> in <module>()
      1 import pandas as pd
      2 df = pd.DataFrame({'col-1':range(3), 'col2':range(3)})
----> 3 DB['test1'].insert_all(df.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327                 jsonify_if_needed(record.get(key, None)) for key in all_columns
    328             )
--> 329             result = self.db.conn.execute(sql, values)
    330             self.db.conn.commit()
    331             self.last_id = result.lastrowid

OperationalError: near "-": syntax error
```
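For reference, SQLite itself is happy with such identifiers as long as they are quoted; a minimal stdlib sketch of the [bracket] quoting the generated SQL would need:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite accepts awkward identifiers as long as they are quoted --
# [square brackets] (or "double quotes") around each name:
conn.execute("CREATE TABLE test1 ([col 1] INTEGER, [col-2] INTEGER)")
conn.executemany(
    "INSERT INTO test1 ([col 1], [col-2]) VALUES (?, ?)",
    [(i, i) for i in range(3)],
)
rows = conn.execute("SELECT [col 1], [col-2] FROM test1").fetchall()
print(rows)  # [(0, 0), (1, 1), (2, 2)]
```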

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/8/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
415575624 MDU6SXNzdWU0MTU1NzU2MjQ= 414 datasette requires specific version of Click psychemedia 82988 closed 0     1 2019-02-28T11:24:59Z 2019-03-15T04:42:13Z 2019-03-15T04:42:13Z CONTRIBUTOR  

Is datasette beholden to version click==6.7?

Current release is at 7.0. Can the requirement be liberalised, eg to >=6.7?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/414/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
411066700 MDU6SXNzdWU0MTEwNjY3MDA= 10 Error in upsert if column named 'order' psychemedia 82988 closed 0     1 2019-02-16T12:05:18Z 2019-02-24T16:55:38Z 2019-02-24T16:55:37Z NONE  

The following works fine:

```python
import sqlite3
import pandas as pd
from sqlite_utils import Database

connX = sqlite3.connect('DELME.db', timeout=10)

dfX = pd.DataFrame({'col1': range(3), 'col2': range(3)})
DBX = Database(connX)
DBX['test'].upsert_all(dfX.to_dict(orient='records'))
```

But if a column is named order:

```python
connX = sqlite3.connect('DELME.db', timeout=10)

dfX = pd.DataFrame({'order': range(3), 'col2': range(3)})
DBX = Database(connX)
DBX['test'].upsert_all(dfX.to_dict(orient='records'))
```

it throws an error:

```
OperationalError                          Traceback (most recent call last)
<ipython-input-130-7dba33cd806c> in <module>
      3 dfX=pd.DataFrame({'order':range(3),'col2':range(3)})
      4 DBX = Database(connX)
----> 5 DBX['test'].upsert_all(dfX.to_dict(orient='records'))

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in upsert_all(self, records, pk, foreign_keys, column_order)
    347             foreign_keys=foreign_keys,
    348             upsert=True,
--> 349             column_order=column_order,
    350         )
    351

/usr/local/lib/python3.7/site-packages/sqlite_utils/db.py in insert_all(self, records, pk, foreign_keys, upsert, batch_size, column_order)
    327                 jsonify_if_needed(record.get(key, None)) for key in all_columns
    328             )
--> 329             result = self.db.conn.execute(sql, values)
    330             self.db.conn.commit()
    331             self.last_id = result.lastrowid

OperationalError: near "order": syntax error
```

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/10/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
377166793 MDU6SXNzdWUzNzcxNjY3OTM= 372 Docker build tools psychemedia 82988 open 0     0 2018-11-04T16:02:35Z 2018-11-04T16:02:35Z   CONTRIBUTOR  

In terms of small pieces lightly joined, I note that there are several tools starting to appear for generating Dockerfiles and building Docker containers from simpler components such as requirements.txt files.

If plugin/extension builders want to include additional packages, then things like incremental or composable builds that add additional items on top of a base datasette container may be required.

Examples of Dockerfile generators / container builders:

  • openshift/source-to-image (s2i)
  • jupyter/repo2docker
  • stencila/dockter

Discussions / threads (via Binderhub gitter) on:

  • why repo2docker not s2i
  • why dockter not repo2docker
  • composability in s2i

Relates to things like:

  • https://github.com/simonw/datasette/pull/280
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/372/reactions",
    "total_count": 2,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 2,
    "rocket": 0,
    "eyes": 0
}
   
336924199 MDU6SXNzdWUzMzY5MjQxOTk= 330 Limit text display in cells containing large amounts of text psychemedia 82988 closed 0     4 2018-06-29T09:15:22Z 2018-07-24T04:53:20Z 2018-07-10T16:20:48Z CONTRIBUTOR  

The default preview of a database shows all columns (is the row count limited?) which is fine in many cases but can take a long time to load / offer a large overhead if the table is a SpatiaLite table containing geometry columns that include large shapefiles.

Would it make sense to have a setting that can limit the amount of text displayed in any given cell in the table preview, or (less useful?) suppress (with notification) the display of overlong columns unless enabled by the user?

An issue then arises if a user does want to see all the text in a cell:

  1. for a particular cell;
  2. for every cell in the table;
  3. for all cells in a particular column or columns.

(I haven't checked but what if a column contains e.g. raw image data? Does this display as raw data? Or can this be rendered in a context aware way as an image preview? I guess a custom template would be one way to do that?)

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/330/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed
286938589 MDU6SXNzdWUyODY5Mzg1ODk= 177 Publishing to Heroku - metadata file not uploaded? psychemedia 82988 closed 0     0 2018-01-09T01:04:31Z 2018-01-25T16:45:32Z 2018-01-25T16:45:32Z CONTRIBUTOR  

Trying to run datasette (version 0.14) on Heroku with a metadata.json doesn't seem to be picking up the metadata.json file?

On a Mac with dodgy tar support:

```
 ▸    Couldn't detect GNU tar. Builds could fail due to decompression errors
 ▸    See https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
 ▸    Please install it, or specify the '--tar' option
 ▸    Falling back to node's built-in compressor
```

Could that be causing the issue?

Also, I'm not seeing custom query links anywhere obvious when I run the metadata file with a local datasette server?

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/177/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
Powered by Datasette · Queries took 51.427ms · About: github-to-sqlite