Query History Examples

Trace and Record Expensive Queries

The Query History feature can also be used to capture queries that take longer than expected to run. For the sake of illustration, imagine that a given query takes one second to complete. Events like these can be captured over time, yielding a collection of all time-intensive queries that occurred over a given period. This provides better insight into which workloads are the most demanding on the cluster and should therefore be optimized.

  1. Set the events queue size, trace_events_queue_size, to 1,600,000 bytes.

    SET GLOBAL trace_events_queue_size = 1600000;
  2. Enable the Query_completion event type to look for queries that take over a second to run.

    CREATE EVENT TRACE Query_completion WITH (Query_text = on, Duration_threshold_ms = 1000);
  3. View the Query History dashboard. After a few minutes, those queries that take over a second to run will be displayed, from the most time-intensive to the least.

    A screenshot of the Query History dashboard ordered by query runtime, descending.
  4. Let the collection of queries run for a few hours, ideally during a peak usage period. Doing so will reveal expensive queries in the Query History dashboard that can then be analyzed and optimized with query optimization tools such as Visual Explain.

    Refer to the View and Optimize a Workload in Real-Time section below for an example of how to optimize expensive queries.

Resolve an Unexpected Slowdown

For the sake of illustration, imagine that the cluster monitoring dashboards revealed an uncharacteristic system slowdown during a specific period, where CPU and memory utilization spiked significantly. The Query History dashboard can be used to determine whether this slowdown becomes a trend and, if it does, to identify which queries were running during that time.

  1. Set the events queue size, trace_events_queue_size, to 1,600,000 bytes.

    SET GLOBAL trace_events_queue_size = 1600000;
  2. Enable the Query_completion event type to look for queries that take over a second to run.

    CREATE EVENT TRACE Query_completion WITH (Query_text = on, Duration_threshold_ms = 1000);
  3. To see if there are any queries that require immediate attention, view the Query History dashboard. After a few minutes, those queries that take over a second to run will be displayed, from the most time-intensive to the least.

    A screenshot of the Query History dashboard ordered by query runtime, descending.
  4. Select one of these queries and profile the query in Visual Explain. Visual Explain will provide suggestions for optimizing each query. Repeat this process for all queries that take over a second to run.

  5. Let the collection of queries run for a few hours, ideally during a peak usage period. Then, view the cluster monitoring dashboards to see if the system slowdown has recurred. If it has, review the Query History dashboard to determine whether any long-running queries were the cause, or were at least a contributing factor. If long-running queries are present, Visual Explain can be used to optimize them.

    Refer to the View and Optimize a Workload in Real-Time section below for an example of how to optimize expensive queries.

View Query Runtimes Over Time

Filtering by a specific activity_name, a common identifier for queries of the same shape, can be used to better understand how a query’s runtime is distributed or changes across multiple time periods, or when the query runs with different parameters.

  1. Set the events queue size, trace_events_queue_size, to 1,600,000 bytes.

    SET GLOBAL trace_events_queue_size = 1600000;
  2. Enable the Query_completion event type to look for queries that take over 5 milliseconds to run.

    CREATE EVENT TRACE Query_completion WITH (Query_text = on, Duration_threshold_ms = 5);
  3. View the Query History dashboard and filter by activity_name. In this example, the activity name is InsertSelect_expedited_orders…, which corresponds to the INSERT INTO expedited_orders SQL statement in the following View and Optimize a Workload in Real-Time example.

    A screenshot of a filter being applied to the Query History dashboard.

    The results that are displayed reflect this query’s runtime over different time periods.

    A screenshot of the Query History dashboard filtered by expedited_orders.
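Conceptually, filtering by activity_name groups completion events for the same query shape, which is what makes a per-shape runtime distribution visible. The sketch below is a hypothetical illustration (the event values are invented, and this is not how the dashboard is implemented):

```python
from statistics import mean

# Hypothetical (activity_name, duration_ms) completion events, loosely
# modeled on what the Query History dashboard collects.
events = [
    ("InsertSelect_expedited_orders", 12.0),
    ("Select_expedited_orders", 480.0),
    ("InsertSelect_expedited_orders", 9.5),
    ("Select_expedited_orders", 54.0),
    ("InsertSelect_expedited_orders", 11.0),
]

# Grouping by activity_name yields one runtime distribution per query shape.
by_activity = {}
for name, ms in events:
    by_activity.setdefault(name, []).append(ms)

for name, durations in sorted(by_activity.items()):
    print(f"{name}: n={len(durations)}, mean={mean(durations):.1f} ms")
```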

View and Optimize a Workload in Real-Time

In this example, a table containing a million rows will be created. For the sake of illustration, imagine that lookups performed on a specific column each take one second to complete. Over time, this table is projected to grow to over two million rows, at which point each lookup will likely take two seconds to complete.

While it is suspected that adding an index on the column will improve this query’s performance, this hypothesis can be confirmed using the Query History and Visual Explain features of SingleStore.

Create the Dataset

  1. In a SQL client, create the following rowstore table in the database of your choice (such as a test database) and fill it with mock order data.

    CREATE ROWSTORE TABLE IF NOT EXISTS expedited_orders(
    order_id INT,
    order_date DATETIME,
    KEY(order_id),
    SHARD(order_id));
    INSERT INTO expedited_orders VALUES(1, now());
  2. Re-run the following SQL statement until 1,048,576 rows of data have been inserted into the table. Each run doubles the row count, so 20 runs are required in total.

    INSERT INTO expedited_orders
    SELECT order_id+(SELECT max(order_id) FROM expedited_orders), now()
    FROM expedited_orders;
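The arithmetic behind step 2 can be sketched as follows: each re-run of the INSERT ... SELECT copies every existing row once (with a shifted order_id), doubling the table, so reaching 1,048,576 rows from the single seeded row takes 20 runs:

```python
# Each re-run of the INSERT ... SELECT doubles the row count: every
# existing row produces one new row with an order_id shifted past the
# current maximum.
rows = 1  # the single row seeded by the initial INSERT
runs = 0
while rows < 1_048_576:
    rows *= 2  # effect of one re-run of the statement
    runs += 1

print(runs, rows)  # 20 runs reach 1,048,576 rows (2^20)
```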

Obtain an Order Date

  1. In a SQL client, run the following query to obtain a sample order_date that matches exactly one row.

    SELECT order_date, count(*)
    FROM expedited_orders
    GROUP BY 1
    HAVING count(*) = 1;
  2. Run the following query using an order_date from the previous query result.

    Note: The remainder of these steps will use 2023-12-04 15:50:41 for the order_date. Use an order_date from your own database to ensure that this example will work.

    SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41";

View Query Performance

  1. Set the events queue size, trace_events_queue_size, to 1,600,000 bytes.

    SET GLOBAL trace_events_queue_size = 1600000;
  2. Enable the Query_completion event type to look for queries that take over 250 milliseconds to run.

    CREATE EVENT TRACE Query_completion WITH (Query_text = on, Duration_threshold_ms = 250);
  3. View the Query History dashboard. After a few minutes, those queries that take over 250 milliseconds to run will be displayed, from the most time-intensive to the least.

    A screenshot of the Query History dashboard ordered by query runtime, descending.

    Of the queries displayed, SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41"; is the example query to optimize, as it required nearly 500 milliseconds to complete.

Visualize and Optimize the Query

  1. In a SQL client, run the following PROFILE-related queries to obtain the JSON output.

    PROFILE SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41";
    SHOW PROFILE JSON;
  2. In Visual Explain, click the Paste JSON button, paste the JSON output into the provided field, and click the Show Explain button.

    Visualizing this query reveals that SingleStore performed a full table scan (indicated by TableScan) in this query. As a consequence, each row was scanned to determine which order matched the desired order_date.

    Note that the execution time for this query is 579 milliseconds, which includes the query compilation time.

    A screenshot of Visual Explain profiling the first run of SELECT * FROM expedited_orders.
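The cost difference between the two access patterns can be illustrated with a small sketch. This is only an analogy (a Python dict standing in for an index), not SingleStore’s internal implementation:

```python
# One million mock rows of (order_id, order_date), analogous to the
# expedited_orders table. The dates here are synthetic.
rows = [(i, f"2023-12-04 15:50:{i % 60:02d}") for i in range(1_000_000)]
target = "2023-12-04 15:50:41"

# Full table scan (TableScan): every row is examined.
scan_hits = [r for r in rows if r[1] == target]

# Index (analogous to CREATE INDEX ... ON (order_date)): a one-time build,
# after which a lookup touches only the rows that match the key.
index = {}
for r in rows:
    index.setdefault(r[1], []).append(r)
index_hits = index.get(target, [])

assert scan_hits == index_hits  # same result, far less work per lookup
```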

To determine if adding an index will improve performance:

  1. In a SQL client, add an index on the order_date column.

    CREATE INDEX date_index ON expedited_orders (order_date);
  2. Re-run the SELECT * FROM expedited_orders query.

    SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41";
  3. Re-run the PROFILE-related queries to obtain the JSON output.

    PROFILE SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41";
    SHOW PROFILE JSON;
  4. Copy the JSON output to the clipboard.

  5. In Visual Explain, click the vertical dots menu in the upper right-hand corner, click the Paste JSON Explain/Profile option, paste the JSON output into the provided field, and click the Show Explain button.

    Visualizing this query reveals that SingleStore used an index range scan (indicated by IndexRangeScan) instead of a full table scan, thereby limiting the scan to only those rows that should contain the specified order_date.

    Note that the execution time for this query is down to 54 milliseconds, which also includes the query compilation time.

    A screenshot of Visual Explain profiling the second run of SELECT * FROM expedited_orders.

To see how quickly this query will run without the query compilation time:

  1. In a SQL client, re-run the SELECT * FROM expedited_orders query.

    SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41";
  2. Re-run the PROFILE-related queries to obtain the JSON output.

    PROFILE SELECT * FROM expedited_orders WHERE order_date = "2023-12-04 15:50:41";
    SHOW PROFILE JSON;
  3. Copy the JSON output to the clipboard.

  4. In Visual Explain, click the vertical dots menu in the upper right-hand corner, click the Paste JSON Explain/Profile option, paste the JSON output into the provided field, and click the Show Explain button.

    Visualizing this query reveals that the execution time has been reduced to 4 milliseconds.

    A screenshot of Visual Explain profiling the final run of SELECT * FROM expedited_orders.

Review Query Performance

View the Query History dashboard. Filter by the SELECT * FROM expedited_orders query.

A screenshot of a filter being applied to the Query History dashboard.

The query’s performance history is displayed.

A screenshot of the Query History dashboard filtered by SELECT * from expedited_orders.

Note that the query duration in Query History and the query execution time in Visual Explain will differ slightly due to how the query duration is measured.

Summary

By employing both the Query History and Visual Explain features to locate and optimize the query in this example, the runtime for the SELECT * FROM expedited_orders query has been reduced:

  • From an initial 579 milliseconds for a full table scan

  • To 54 milliseconds after introducing an index, which includes the query compilation time

  • To only 4 milliseconds after the query had been compiled
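As a quick arithmetic check on these figures, using the execution times reported by Visual Explain in the steps above:

```python
full_scan_ms = 579  # initial run, full table scan (includes compilation)
indexed_ms = 54     # after CREATE INDEX (includes compilation)
compiled_ms = 4     # indexed plan with compilation already cached

print(f"index speedup:       {full_scan_ms / indexed_ms:.1f}x")   # ~10.7x
print(f"compilation savings: {full_scan_ms / compiled_ms:.1f}x")  # ~144.8x
```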

For additional examples of using Visual Explain, refer to SingleStore Visual Explain and the final lab in the SingleStore Query Tuning training course.

Last modified: February 2, 2024
