Query errors in server access mode when querying more than 1 hour

Hi, I’m having trouble when querying ranges longer than 1 hour.

=== There are three types of error ===
#1. TypeError: Cannot read properties of undefined (reading ‘0’)
#2. bad_data: exceeded maximum resolution of 11,000 points per timeseries. Try decreasing the query resolution (?step=XX)
#3. timeout awaiting response headers

#1 and #2 occur when querying 1–24 hours.
#3 occurs when querying 7 days.

But all the errors disappear when I change the access mode from “Server Access” to “Browser Access”.
This means there is no problem when accessing Prometheus directly from the browser,
so I suspect there is a problem between the Grafana server and the Prometheus server.
However, there is no problem when querying under 1 hour.

Everything works when I use browser mode, but I heard this access mode will be removed in the future. So I wonder: when will “Browser access mode” be removed, and how can I fix this problem in “Server access mode”?

=== Server info. ===

  1. Grafana
     1. version : Open Source v1.0 (1)
     2. port : 3000
  2. Prometheus
     1. version : 2.8.0
     2. port : 9090
  3. Server (VM)
     1. OS : Ubuntu 20.04 LTS
     2. HW : 64 vCPU, 256 GB RAM, 50 GB disk

@jihyeonkim001 Yes, browser access is removed in version 9.0.

The error exceeded maximum resolution of 11,000 points per timeseries. means that you’ve hit a hard limit on the number of data points a single range query can return from Prometheus. Since the limit is hardcoded in Prometheus itself, it shouldn’t depend on the connection method.

The issue below in the prometheus repository has more information about the datapoint limit and potential workarounds:

    // For safety, limit the number of returned points per timeseries.
    // This is sufficient for 60s resolution for a week or
    // 1h resolution for a year.
    if end.Sub(start)/step > 11000 {
        err := errors.New("exceeded maximum resolution of 11,000 points per timeseries. Try decreasing the query resolution (?step=XX)")
        return nil, &apiError{errorBadData, err}
    }