Known issues in Trino
This topic describes the known issues for Trino in Cloudera Data Warehouse on cloud, version 2025.0.21.0-185.
Known issues identified in the March 31, 2026 release
- DWX-22835/DWX-22745: Trino is unable to access Snowflake schemas created in lowercase
- When accessing Snowflake schemas or tables through Trino (using Cloudera Data Explorer (Hue) or the Trino CLI), queries may fail with a TABLE_NOT_FOUND error, even if the table is visible in Data Explorer. This issue can occur when objects in Snowflake were created using explicit lowercase names with double quotes. In Snowflake, if you create a schema or table in lowercase without double quotes (for example, create schema my_schema), Snowflake automatically stores the name in uppercase (MY_SCHEMA). If you create it using lowercase within double quotes (for example, create schema "my_schema"), the object is stored in lowercase.
- DWX-22659: cm_trino Ranger service user impersonation defaults should be more restrictive
- The Ranger Trino authorization service (cm_trino) is configured with a default security posture that is too permissive. By default, the Ranger configuration parameter ranger.default.policy.groups is populated with a specific administrative group (_c_ranger_admins_…). As a result, the default Ranger policy "all - trinouser" allows any user within this administrative group to impersonate any other user.
- DWX-22360: ArrayIndexOutOfBoundsException when querying large MariaDB tables
- When querying extremely large tables (containing hundreds of millions of rows or more) using the MariaDB connector, users may encounter an ArrayIndexOutOfBoundsException.
- DWX-22578: Large queries fail with "Per-Node Memory Limit Exceeded" despite spilling enabled
- Large queries involving heavy joins may fail with a "per-node memory limit exceeded" error, even if the "spill-to-disk" feature is properly enabled. This issue is primarily caused by missing or inaccurate table statistics, particularly on join columns. Without accurate statistics, the Trino query optimizer may select suboptimal join strategies that cause memory usage to spike rapidly, hitting the node limit before the spill mechanism can react.
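Because the failure stems from missing or stale statistics, one mitigation is to collect statistics on the tables involved in the join before re-running the query. A minimal sketch, assuming the tables live in a catalog whose connector supports ANALYZE (for example, the Hive connector); the table names orders and customers are hypothetical:

```sql
-- Check whether the optimizer already has statistics for the join tables
-- (hypothetical table names)
SHOW STATS FOR orders;
SHOW STATS FOR customers;

-- Collect fresh table and column statistics so the optimizer can choose
-- a join strategy that stays within the per-node memory limit
ANALYZE orders;
ANALYZE customers;
```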
- DWX-22392: Trino compute update fails due to entity being leased by a background operation
- Trino auto-suspend and auto-scaling processes do not account for manual operations performed by users, such as Start or Update. To prevent conflicts between manual and automatic operations, the lease on the Trino Virtual Warehouse is held for one minute after a manual operation is executed. During this delay, attempting another manual operation may result in the error message: "Compute entity is currently ‘leased’ by another internal operation."
- DWX-21891: Delayed metadata loading in Cloudera Data Explorer (Hue) for auto-suspended Trino Virtual Warehouses
- When a Data Explorer session starts, Data Explorer sends requests to the Trino coordinator to fetch database metadata. For large metadata tables, the coordinator divides the metadata request into "splits" and assigns them to worker nodes. However, if the Trino Virtual Warehouse is auto-suspended, all worker nodes are stopped, causing the metadata request to be queued. This triggers the auto-start process to spin up the worker nodes. Since it can take a few minutes to start worker nodes in a Kubernetes cluster, Data Explorer may take a long time to load metadata for an auto-suspended Trino Virtual Warehouse.
- DWX-21977: Cloudera Data Warehouse allows the deletion of a federation connector associated with a Trino Virtual Warehouse
- Cloudera Data Warehouse allows you to delete a federation connector associated with a Trino Virtual Warehouse. However, the Virtual Warehouse can continue using the connector until it is restarted. This behavior occurs because the Virtual Warehouse stores connector information in the local pods. This issue, referred to as a stale connector issue, may occur if a user updates or deletes a connector already in use by the Virtual Warehouse.
- CDPD-76644: information_schema.table_privileges metadata is unsupported
- Querying the information_schema.table_privileges access control metadata for Ranger is unsupported, and a TrinoException is displayed indicating that the connector does not support table privileges.
- CDPD-76643/CDPD-76645: SET AUTHORIZATION SQL statement does not modify Ranger permissions
- The following SQL statements do not dynamically modify the Ranger permissions:
CREATE SCHEMA test_createschema_authorization_user AUTHORIZATION user;
ALTER SCHEMA test_schema_authorization_user SET AUTHORIZATION user;
- CDPD-68246: Role-related operations are not authorized by the Ranger Trino plugin
- The SHOW ROLES, SHOW CURRENT ROLES, and SHOW ROLE GRANTS statements are not authorized by the Ranger Trino plugin. Users can run these statements without any policy, and no audits are generated for them.
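For example, any user can run the following statements from the Trino CLI, regardless of the Ranger policies in place, and no audit events are produced:

```sql
-- These role-related statements bypass the Ranger Trino plugin:
-- they succeed without a matching policy and generate no audit records
SHOW ROLES;
SHOW CURRENT ROLES;
SHOW ROLE GRANTS;
```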
- CDPD-81960: Row filter policy for same resource and same user across different Ranger services is not supported
- When you have a row filter policy for the same resource and the same user in both the cm_trino and cm_hive (Hadoop SQL) Ranger services, and the row filtering conditions differ, querying the table as that user returns an empty response in the trino-cli.
- DWX-19626: Number of rows returned by Trino does not match the Hive query results
- If you run the exact same query involving integer division on both the Hive and Trino engines, the results returned by Trino may not match the results returned by Hive. This is due to the default behavior of Trino when dividing two integers: Trino does not cast the result to a FLOAT data type, so the fractional part is truncated.
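As a workaround, you can cast one of the operands explicitly so that Trino performs floating-point division and matches Hive's behavior:

```sql
-- Trino truncates integer division: this returns 2, whereas Hive returns 2.5
SELECT 5 / 2;

-- Casting an operand to DOUBLE makes Trino return 2.5 as well
SELECT CAST(5 AS DOUBLE) / 2;
```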
