Merge pull request #1320 from burnash/feature/release_6_0_0-updated
Merge `master` into `feature/release_6_0_0`
alifeee committed Oct 26, 2023
2 parents 15f5d6e + 6b12458 commit ba7bfc6
Showing 112 changed files with 77,029 additions and 7,944 deletions.
10 changes: 10 additions & 0 deletions .github/CONTRIBUTING.md
@@ -17,6 +17,16 @@

- Please follow [Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/).

## Tests

To run tests, add your credentials to `tests/creds.json` and run

```bash
GS_CREDS_FILENAME="tests/creds.json" GS_RECORD_MODE="all" tox -e py -- -k "<specific test to run>" -v -s
```

For more information on tests, see below.

## CI checks

If the [test](#run-tests-offline) or [lint](#lint) commands fail, the CI will fail, and you won't be able to merge your changes into gspread.
2 changes: 1 addition & 1 deletion .github/workflows/main.yaml
@@ -19,7 +19,7 @@ jobs:
matrix:
python: ["3.8", "3.9", "3.10", "3.11", "3.x"]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python }}
2 changes: 1 addition & 1 deletion .github/workflows/release.yaml
@@ -13,7 +13,7 @@ jobs:
contents: write
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup python
uses: actions/setup-python@v4
with:
32 changes: 31 additions & 1 deletion HISTORY.rst
@@ -1,8 +1,38 @@
Release History
===============

5.11.3 (2023-09-29)
-------------------

* Fix list_spreadsheet_files return value by @mephinet in https://github.com/burnash/gspread/pull/1308

5.11.2 (2023-09-18)
-------------------

* Fix merge_combined_cells in get_values (AND 5.11.2 RELEASE) by @alifeee in https://github.com/burnash/gspread/pull/1299

5.11.1 (2023-09-06)
-------------------

* Bump actions/checkout from 3 to 4 by @dependabot in https://github.com/burnash/gspread/pull/1288
* remove Drive API access on Spreadsheet init (FIX - VERSION 5.11.1) by @alifeee in https://github.com/burnash/gspread/pull/1291

5.11.0 (2023-09-04)
-------------------

* add docs/build to .gitignore by @alifeee in https://github.com/burnash/gspread/pull/1246
* add release process to CONTRIBUTING.md by @alifeee in https://github.com/burnash/gspread/pull/1247
* Update/clean readme badges by @lavigne958 in https://github.com/burnash/gspread/pull/1251
* add test_fill_gaps and docstring for fill_gaps by @alifeee in https://github.com/burnash/gspread/pull/1256
* Remove API calls from `creationTime`/`lastUpdateTime` by @alifeee in https://github.com/burnash/gspread/pull/1255
* Fix Worksheet ID Type Inconsistencies by @FlantasticDan in https://github.com/burnash/gspread/pull/1269
* Add `column_count` prop as well as `col_count` by @alifeee in https://github.com/burnash/gspread/pull/1274
* Add required kwargs with no default value by @lavigne958 in https://github.com/burnash/gspread/pull/1271
* Add deprecation warnings for colors by @alifeee in https://github.com/burnash/gspread/pull/1278
* Add better Exceptions on opening spreadsheets by @alifeee in https://github.com/burnash/gspread/pull/1277

5.10.0 (2023-06-29)
------------------
-------------------

* Fix rows_auto_resize in worksheet.py by removing redundant self by @MagicMc23 in https://github.com/burnash/gspread/pull/1194
* Add deprecation warning for future release 6.0.x by @lavigne958 in https://github.com/burnash/gspread/pull/1195
1 change: 0 additions & 1 deletion docs/api/exceptions.rst
@@ -3,7 +3,6 @@ Exceptions


.. autoexception:: gspread.exceptions.APIError
.. autoexception:: gspread.exceptions.CellNotFound
.. autoexception:: gspread.exceptions.GSpreadException
.. autoexception:: gspread.exceptions.IncorrectCellLabel
.. autoexception:: gspread.exceptions.InvalidInputValue
1 change: 0 additions & 1 deletion gspread/__init__.py
@@ -14,7 +14,6 @@
from .cell import Cell
from .client import Client
from .exceptions import (
CellNotFound,
GSpreadException,
IncorrectCellLabel,
NoValidUrlKeyFound,
59 changes: 41 additions & 18 deletions gspread/client.py
@@ -6,12 +6,13 @@
"""

from typing import Any, Dict, List, Optional, Union
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Tuple, Union

from google.auth.credentials import Credentials
from requests import Response

from .exceptions import SpreadsheetNotFound, UnSupportedExportFormat
from .exceptions import APIError, SpreadsheetNotFound, UnSupportedExportFormat
from .http_client import HTTPClient, HTTPClientType, ParamsType
from .spreadsheet import Spreadsheet
from .urls import (
@@ -58,34 +59,47 @@ def list_spreadsheet_files(
The parameter ``folder_id`` can be obtained from the URL when looking at
a folder in a web browser as follow:
``https://drive.google.com/drive/u/0/folders/<folder_id>``
:returns: a list of dicts containing the keys id, name, createdTime and modifiedTime.
"""
files, _ = self._list_spreadsheet_files(title=title, folder_id=folder_id)
return files

def _list_spreadsheet_files(
self, title=None, folder_id=None
) -> Tuple[List[Dict[str, Any]], Response]:
files = []
page_token = ""
url = DRIVE_FILES_API_V3_URL

q = 'mimeType="{}"'.format(MimeType.google_sheets)
query = f'mimeType="{MimeType.google_sheets}"'
if title:
q += ' and name = "{}"'.format(title)
query += f' and name = "{title}"'
if folder_id:
q += ' and parents in "{}"'.format(folder_id)
query += f' and parents in "{folder_id}"'

params: ParamsType = {
"q": q,
"q": query,
"pageSize": 1000,
"supportsAllDrives": True,
"includeItemsFromAllDrives": True,
"fields": "kind,nextPageToken,files(id,name,createdTime,modifiedTime)",
}

while page_token is not None:
while True:
if page_token:
params["pageToken"] = page_token

res = self.http_client.request("get", url, params=params).json()
files.extend(res["files"])
page_token = res.get("nextPageToken", None)
response = self.http_client.request("get", url, params=params)
response_json = response.json()
files.extend(response_json["files"])

return files
page_token = response_json.get("nextPageToken", None)

if page_token is None:
break

return files, response

def open(self, title: str, folder_id: Optional[str] = None) -> Spreadsheet:
"""Opens a spreadsheet.
@@ -103,18 +117,19 @@ def open(self, title: str, folder_id: Optional[str] = None) -> Spreadsheet:
>>> gc.open('My fancy spreadsheet')
"""
spreadsheet_files, response = self._list_spreadsheet_files(title, folder_id)
try:
properties = finditem(
lambda x: x["name"] == title,
self.list_spreadsheet_files(title, folder_id),
spreadsheet_files,
)
except StopIteration as ex:
raise SpreadsheetNotFound(response) from ex

# Drive uses different terminology
properties["title"] = properties["name"]
# Drive uses different terminology
properties["title"] = properties["name"]

return Spreadsheet(self.http_client, properties)
except StopIteration:
raise SpreadsheetNotFound
return Spreadsheet(self.http_client, properties)

def open_by_key(self, key: str) -> Spreadsheet:
"""Opens a spreadsheet specified by `key` (a.k.a Spreadsheet ID).
@@ -124,7 +139,15 @@ def open_by_key(self, key: str) -> Spreadsheet:
>>> gc.open_by_key('0BmgG6nO_6dprdS1MN3d3MkdPa142WFRrdnRRUWl1UFE')
"""
return Spreadsheet(self.http_client, {"id": key})
try:
spreadsheet = Spreadsheet(self.http_client, {"id": key})
except APIError as ex:
if ex.response.status_code == HTTPStatus.NOT_FOUND:
raise SpreadsheetNotFound(ex.response) from ex
if ex.response.status_code == HTTPStatus.FORBIDDEN:
raise PermissionError from ex
raise ex
return spreadsheet

def open_by_url(self, url: str) -> Spreadsheet:
"""Opens a spreadsheet specified by `url`.
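The `client.py` changes above mean that title-based `open()` now reuses a single, internally paginated Drive listing and attaches the failing HTTP response to `SpreadsheetNotFound`, while `open_by_key()` maps 404/403 responses to `SpreadsheetNotFound`/`PermissionError` at open time. A minimal usage sketch against this branch (the credentials path, spreadsheet title, and key are hypothetical):

```python
import gspread
from gspread.exceptions import APIError, SpreadsheetNotFound

gc = gspread.service_account(filename="service_account.json")  # hypothetical path

# Title lookup: one Drive files.list query, paginated internally via pageToken.
try:
    sh = gc.open("Quarterly budget")  # hypothetical title
except SpreadsheetNotFound:
    print("no spreadsheet with that title is visible to these credentials")

# Key lookup: sheet metadata is fetched immediately, so missing or forbidden
# spreadsheets now fail here rather than on the first read.
try:
    sh = gc.open_by_key("hypothetical-spreadsheet-key")
except SpreadsheetNotFound:
    print("no spreadsheet with that key (HTTP 404)")
except PermissionError:
    print("spreadsheet exists but is not shared with these credentials (HTTP 403)")
except APIError as exc:
    # Any other API failure still carries the raw requests.Response.
    print("API error:", exc.response.status_code)
```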
15 changes: 7 additions & 8 deletions gspread/exceptions.py
@@ -19,18 +19,10 @@ class GSpreadException(Exception):
"""A base class for gspread's exceptions."""


class SpreadsheetNotFound(GSpreadException):
"""Trying to open non-existent or inaccessible spreadsheet."""


class WorksheetNotFound(GSpreadException):
"""Trying to open non-existent or inaccessible worksheet."""


class CellNotFound(GSpreadException):
"""Cell lookup exception."""


class NoValidUrlKeyFound(GSpreadException):
"""No valid key found in URL."""

@@ -44,6 +36,9 @@ class InvalidInputValue(GSpreadException):


class APIError(GSpreadException):
"""Errors coming from the API itself,
such as when we attempt to retrieve things that don't exist."""

def __init__(self, response: Response):
super().__init__(self._extract_text(response))
self.response: Response = response
@@ -61,3 +56,7 @@ def _text_from_detail(
return dict(errors["error"])
except (AttributeError, KeyError, ValueError):
return None


class SpreadsheetNotFound(GSpreadException):
"""Trying to open non-existent or inaccessible spreadsheet."""
25 changes: 22 additions & 3 deletions gspread/spreadsheet.py
@@ -6,6 +6,7 @@
"""

import warnings
from typing import Union

from .exceptions import WorksheetNotFound
@@ -24,9 +25,6 @@ def __init__(self, http_client, properties):
metadata = self.fetch_sheet_metadata()
self._properties.update(metadata["properties"])

drive_metadata = self.client.get_file_drive_metadata(self._properties["id"])
self._properties.update(drive_metadata)

@property
def id(self):
"""Spreadsheet ID."""
@@ -45,8 +43,23 @@ def url(self):
@property
def creationTime(self):
"""Spreadsheet Creation time."""
if "createdTime" not in self._properties:
self.update_drive_metadata()
return self._properties["createdTime"]

@property
def lastUpdateTime(self):
"""Spreadsheet last updated time.
Only updated on initialisation.
For actual last updated time, use get_lastUpdateTime()."""
warnings.warn(
"worksheet.lastUpdateTime is deprecated, please use worksheet.get_lastUpdateTime()",
category=DeprecationWarning,
)
if "modifiedTime" not in self._properties:
self.update_drive_metadata()
return self._properties["modifiedTime"]

@property
def timezone(self):
"""Spreadsheet timeZone"""
@@ -699,3 +712,9 @@ def get_lastUpdateTime(self) -> str:
"""Get the lastUpdateTime metadata from the Drive API."""
metadata = self.client.get_file_drive_metadata(self.id)
return metadata["modifiedTime"]

def update_drive_metadata(self) -> None:
"""Fetches the drive metadata from the Drive API
and updates the cached values in _properties dict."""
drive_metadata = self.client.get_file_drive_metadata(self._properties["id"])
self._properties.update(drive_metadata)
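With the Drive call removed from the constructor, `createdTime`/`modifiedTime` are only fetched from the Drive API when first accessed and then cached in `_properties`. A small sketch of the resulting behavior (credentials path and key are hypothetical):

```python
import warnings

import gspread

gc = gspread.service_account(filename="service_account.json")  # hypothetical path
sh = gc.open_by_key("hypothetical-spreadsheet-key")  # only the Sheets API is used here

# First access triggers a single Drive API call and caches the result.
print(sh.creationTime)

# The deprecated property serves the cached value and emits a DeprecationWarning.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    print(sh.lastUpdateTime)

# For an up-to-date timestamp, query the Drive API explicitly,
print(sh.get_lastUpdateTime())

# or refresh both cached values in one call.
sh.update_drive_metadata()
```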
34 changes: 25 additions & 9 deletions gspread/utils.py
@@ -553,13 +553,16 @@ def wid_to_gid(wid: str) -> str:
return str(int(widval, 36) ^ xorval)


def rightpad(row: List[Any], max_len: int) -> List[Any]:
def rightpad(row: List[Any], max_len: int, padding_value: Any = "") -> List[Any]:
pad_len = max_len - len(row)
return row + ([""] * pad_len) if pad_len != 0 else row
return row + ([padding_value] * pad_len) if pad_len != 0 else row


def fill_gaps(
L: List[List[Any]], rows: Optional[int] = None, cols: Optional[int] = None
L: List[List[Any]],
rows: Optional[int] = None,
cols: Optional[int] = None,
padding_value: Any = "",
) -> List[List[Any]]:
"""Fill gaps in a list of lists.
e.g.,::
@@ -576,10 +579,12 @@ def fill_gaps(
:param L: List of lists to fill gaps in.
:param rows: Number of rows to fill.
:param cols: Number of columns to fill.
:param padding_value: Default value to fill gaps with.
:type L: list[list[T]]
:type rows: int
:type cols: int
:type padding_value: T
:return: List of lists with gaps filled.
:rtype: list[list[T]]:
@@ -593,7 +598,7 @@ def fill_gaps(
if pad_rows:
L = L + ([[]] * pad_rows)

return [rightpad(row, max_cols) for row in L]
return [rightpad(row, max_cols, padding_value=padding_value) for row in L]
except ValueError:
return [[]]

Expand All @@ -602,7 +607,7 @@ def cell_list_to_rect(cell_list: List["Cell"]) -> List[List[Optional[str]]]:
if not cell_list:
return []

rows: Dict[int, Dict[int, str]] = defaultdict(lambda: {})
rows: Dict[int, Dict[int, str]] = defaultdict(dict)

row_offset = min(c.row for c in cell_list)
col_offset = min(c.col for c in cell_list)
@@ -698,22 +703,33 @@ def combined_merge_values(worksheet_metadata, values):
]
if the top-left four cells are merged.
:param worksheet_metadata: The metadata returned by the Google API for the worksheet. Should have a "merges" key.
:param worksheet_metadata: The metadata returned by the Google API for the worksheet.
Should have a "merges" key.
:param values: The values returned by the Google API for the worksheet. 2D array.
"""
merges = worksheet_metadata.get("merges", [])
# each merge has "startRowIndex", "endRowIndex", "startColumnIndex", "endColumnIndex
new_values = [[v for v in row] for row in values]
new_values = [list(row) for row in values]

# max row and column indices
max_row_index = len(values) - 1
max_col_index = len(values[0]) - 1

for merge in merges:
start_row, end_row = merge["startRowIndex"], merge["endRowIndex"]
start_col, end_col = merge["startColumnIndex"], merge["endColumnIndex"]
# if out of bounds, ignore
if start_row > max_row_index or start_col > max_col_index:
continue
top_left_value = values[start_row][start_col]
row_indices = range(start_row, end_row)
col_indices = range(start_col, end_col)
for row_index in row_indices:
for col_index in col_indices:
# if out of bounds, ignore
if row_index > max_row_index or col_index > max_col_index:
continue
new_values[row_index][col_index] = top_left_value

return new_values
@@ -761,8 +777,8 @@ def convert_hex_to_colors_dict(hex_color: str) -> Mapping[str, float]:
}

return rgb_color
except ValueError:
raise ValueError(f"Invalid character in hex color string: #{hex_color}")
except ValueError as ex:
raise ValueError(f"Invalid character in hex color string: #{hex_color}") from ex


def convert_colors_to_hex_value(
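The `utils.py` changes are easiest to see with plain data: `fill_gaps` (via `rightpad`) can now pad with an arbitrary `padding_value`, and `combined_merge_values` skips merge ranges that fall outside the fetched value grid rather than indexing past the end of `values`. A small self-contained sketch against this branch (the sample data is made up):

```python
from gspread.utils import combined_merge_values, fill_gaps

# Ragged 2-row data padded out to a 3x3 rectangle with a custom filler.
values = [["A1", "B1"], ["A2"]]
print(fill_gaps(values, rows=3, cols=3, padding_value=None))
# [['A1', 'B1', None], ['A2', None, None], [None, None, None]]

# One merge fits inside the returned values, one starts past their last row;
# the out-of-bounds merge is now ignored.
worksheet_metadata = {
    "merges": [
        {"startRowIndex": 0, "endRowIndex": 2, "startColumnIndex": 0, "endColumnIndex": 2},
        {"startRowIndex": 5, "endRowIndex": 7, "startColumnIndex": 0, "endColumnIndex": 2},
    ]
}
values = [
    ["top-left", "", "C1"],
    ["", "", "C2"],
]
print(combined_merge_values(worksheet_metadata, values))
# [['top-left', 'top-left', 'C1'], ['top-left', 'top-left', 'C2']]
```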