PostgreSQL | Installing and Configuring pgBadger on CentOS
Welcome to our guide on installing and configuring pgBadger on CentOS! pgBadger is a PostgreSQL log analyzer that turns your database log files into detailed HTML reports, helping you understand database performance.
Step 1: Prerequisites
pgBadger is written in pure Perl, so Perl must be installed on your system. Install it with the command below:
yum install -y perl perl-devel
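The build step later in this guide runs Makefile.PL, which requires the ExtUtils::MakeMaker module (normally pulled in by perl-devel). If you want to confirm that Perl and the module are available before building, a quick optional check:
perl -v | head -2
perl -MExtUtils::MakeMaker -e 'print "ExtUtils::MakeMaker $ExtUtils::MakeMaker::VERSION\n"'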
Step 2: Download and Install pgBadger
Download the latest release from the GitHub releases page:
wget https://github.com/darold/pgbadger/archive/refs/tags/v12.4.tar.gz
[root@localhost folder]# wget https://github.com/darold/pgbadger/archive/refs/tags/v12.4.tar.gz
--2024-04-09 21:44:15-- https://github.com/darold/pgbadger/archive/refs/tags/v12.4.tar.gz
Resolving github.com (github.com)... 20.207.73.82
Connecting to github.com (github.com)|20.207.73.82|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/darold/pgbadger/tar.gz/refs/tags/v12.4 [following]
--2024-04-09 21:44:16-- https://codeload.github.com/darold/pgbadger/tar.gz/refs/tags/v12.4
Resolving codeload.github.com (codeload.github.com)... 20.207.73.88
Connecting to codeload.github.com (codeload.github.com)|20.207.73.88|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘v12.4.tar.gz’
[ <=> ] 4,062,907 8.91MB/s in 0.4s
2024-04-09 21:44:17 (8.91 MB/s) - ‘v12.4.tar.gz’ saved [4062907]
[root@localhost folder]# ls
v12.4.tar.gz
Extract the downloaded archive:
tar -xzvf v12.4.tar.gz
[root@localhost folder]# tar -xzvf v12.4.tar.gz
pgbadger-12.4/
pgbadger-12.4/.editorconfig
pgbadger-12.4/.gitignore
pgbadger-12.4/CONTRIBUTING.md
pgbadger-12.4/ChangeLog
pgbadger-12.4/HACKING.md
pgbadger-12.4/LICENSE
pgbadger-12.4/MANIFEST
pgbadger-12.4/META.yml
pgbadger-12.4/Makefile.PL
pgbadger-12.4/README
pgbadger-12.4/README.md
pgbadger-12.4/doc/
pgbadger-12.4/doc/pgBadger.pod
pgbadger-12.4/pgbadger
pgbadger-12.4/resources/
pgbadger-12.4/resources/.gitignore
pgbadger-12.4/resources/LICENSE
pgbadger-12.4/resources/README
pgbadger-12.4/resources/bean.js
pgbadger-12.4/resources/bootstrap.css
pgbadger-12.4/resources/bootstrap.js
pgbadger-12.4/resources/font/
pgbadger-12.4/resources/font/FontAwesome.otf
pgbadger-12.4/resources/font/fontawesome-webfont.eot
pgbadger-12.4/resources/fontawesome.css
pgbadger-12.4/resources/jqplot.barRenderer.js
pgbadger-12.4/resources/jqplot.canvasAxisTickRenderer.js
pgbadger-12.4/resources/jqplot.canvasTextRenderer.js
pgbadger-12.4/resources/jqplot.categoryAxisRenderer.js
pgbadger-12.4/resources/jqplot.cursor.js
pgbadger-12.4/resources/jqplot.dateAxisRenderer.js
pgbadger-12.4/resources/jqplot.highlighter.js
pgbadger-12.4/resources/jqplot.pieRenderer.js
pgbadger-12.4/resources/jqplot.pointLabels.js
pgbadger-12.4/resources/jquery.jqplot.css
pgbadger-12.4/resources/jquery.jqplot.js
pgbadger-12.4/resources/jquery.js
pgbadger-12.4/resources/patch-jquery.jqplot.js
pgbadger-12.4/resources/pgbadger.css
pgbadger-12.4/resources/pgbadger.js
pgbadger-12.4/resources/pgbadger_slide.js
pgbadger-12.4/resources/underscore.js
pgbadger-12.4/t/
pgbadger-12.4/t/01_lint.t
pgbadger-12.4/t/02_basics.t
pgbadger-12.4/t/03_consistency.t
pgbadger-12.4/t/04_advanced.t
pgbadger-12.4/t/exp/
pgbadger-12.4/t/exp/stmt_type.out
pgbadger-12.4/t/fixtures/
pgbadger-12.4/t/fixtures/anonymize.log
pgbadger-12.4/t/fixtures/begin_end.log
pgbadger-12.4/t/fixtures/cloudsql.log.gz
pgbadger-12.4/t/fixtures/cnpg.log.gz
pgbadger-12.4/t/fixtures/light.postgres.log.bz2
pgbadger-12.4/t/fixtures/logplex.gz
pgbadger-12.4/t/fixtures/multiline_param.log
pgbadger-12.4/t/fixtures/pg-syslog.1.bz2
pgbadger-12.4/t/fixtures/pg-syslog.2.gz
pgbadger-12.4/t/fixtures/pg-timezones.log
pgbadger-12.4/t/fixtures/pgbouncer.1.21.log.gz
pgbadger-12.4/t/fixtures/pgbouncer.log.gz
pgbadger-12.4/t/fixtures/postgresql_param_range.log
pgbadger-12.4/t/fixtures/queryid.log.gz
pgbadger-12.4/t/fixtures/rds.log.bz2
pgbadger-12.4/t/fixtures/stmt_type.log
pgbadger-12.4/t/fixtures/tempfile_only.log.gz
pgbadger-12.4/t/fixtures/weeknumber.log
pgbadger-12.4/tools/
pgbadger-12.4/tools/README.pgbadger_tools
pgbadger-12.4/tools/README.updt_embedded_rsc
pgbadger-12.4/tools/pgbadger_tools
pgbadger-12.4/tools/updt_embedded_rsc.pl
[root@localhost folder]# ll
total 3968
drwxrwxr-x. 6 root root 266 Dec 26 01:05 pgbadger-12.4
-rw-r--r--. 1 root root 4062907 Apr 9 21:44 v12.4.tar.gz
Change into the pgbadger-12.4 directory:
cd pgbadger-12.4/
Run Makefile.PL to generate the Makefile:
perl Makefile.PL
[root@localhost pgbadger-12.4]# perl Makefile.PL
Checking if your kit is complete...
Looks good
Writing Makefile for pgBadger
Compile and install the software:
make && sudo make install
[root@localhost pgbadger-12.4]# make && sudo make install
which: no pod2markdown in (/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
Makefile:824: You must install pod2markdown to generate README.md from doc/pgBadger.pod
cp pgbadger blib/script/pgbadger
/usr/bin/perl -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/pgbadger
echo "=head1 SYNOPSIS" > doc/synopsis.pod
./pgbadger --help >> doc/synopsis.pod
echo "=head1 DESCRIPTION" >> doc/synopsis.pod
sed -i.bak 's/ +$//g' doc/synopsis.pod
rm doc/synopsis.pod.bak
sed -i.bak '/^=head1 SYNOPSIS/,/^=head1 DESCRIPTION/d' doc/pgBadger.pod
sed -i.bak '4r doc/synopsis.pod' doc/pgBadger.pod
rm doc/pgBadger.pod.bak
Manifying blib/man1/pgbadger.1p
rm doc/synopsis.pod
which: no pod2markdown in (/sbin:/bin:/usr/sbin:/usr/bin)
Makefile:824: You must install pod2markdown to generate README.md from doc/pgBadger.pod
echo "=head1 SYNOPSIS" > doc/synopsis.pod
./pgbadger --help >> doc/synopsis.pod
echo "=head1 DESCRIPTION" >> doc/synopsis.pod
sed -i.bak 's/ +$//g' doc/synopsis.pod
rm doc/synopsis.pod.bak
sed -i.bak '/^=head1 SYNOPSIS/,/^=head1 DESCRIPTION/d' doc/pgBadger.pod
sed -i.bak '4r doc/synopsis.pod' doc/pgBadger.pod
rm doc/pgBadger.pod.bak
Manifying blib/man1/pgbadger.1p
Appending installation info to /usr/lib64/perl5/perllocal.pod
rm doc/synopsis.pod
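The pod2markdown warnings in the output above are harmless: they only matter if you want make to regenerate README.md from doc/pgBadger.pod. If you want to silence them anyway, install the Pod::Markdown Perl module; on CentOS the package is typically named perl-Pod-Markdown (available from EPEL, though the exact package name may vary):
yum install -y perl-Pod-Markdown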
Verify the installation:
pgbadger -V
[root@localhost pgbadger-12.4]# pgbadger -V
pgBadger version 12.4
If the version is not shown, the pgBadger install path (/usr/local/bin) is probably missing from your PATH. Add it as follows:
1. Open the .bashrc file: vi ~/.bashrc
2. Add this line to the end of the file and save: export PATH=/usr/local/bin:$PATH
3. Apply the change to your current shell session: source ~/.bashrc
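Equivalently, you can append the line and reload your shell in one go:
echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bashrc
source ~/.bashrc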
Step 3: PostgreSQL Configuration
Set the following parameters in postgresql.conf so that the log output contains everything pgBadger needs:
log_min_duration_statement = 0
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_rotation_age = 1d
log_rotation_size = 10MB
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0
log_error_verbosity = default
lc_messages = 'C'
Note: log_min_duration_statement = 0 logs every statement together with its duration; on a busy server, raise it so that only slow queries are logged. Do not additionally enable log_statement or log_duration: the pgBadger documentation warns that combining them with log_min_duration_statement produces wrong counter values. Finally, lc_messages must yield English log messages ('C' or 'en_US.UTF-8'), because pgBadger only parses English output.
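If you prefer not to edit postgresql.conf by hand, the same settings can be applied from a superuser psql session with ALTER SYSTEM (a sketch using the values above; a configuration reload, or the restart below, makes them take effect):
ALTER SYSTEM SET log_min_duration_statement = 0;
ALTER SYSTEM SET log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log';
ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';
ALTER SYSTEM SET log_rotation_age = '1d';
ALTER SYSTEM SET log_rotation_size = '10MB';
ALTER SYSTEM SET log_checkpoints = on;
ALTER SYSTEM SET log_connections = on;
ALTER SYSTEM SET log_disconnections = on;
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET log_temp_files = 0;
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
ALTER SYSTEM SET lc_messages = 'C';
SELECT pg_reload_conf();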
Restart the PostgreSQL service:
-bash-4.2$ /usr/pgsql-15/bin/pg_ctl -D /pgdata/15/data restart
waiting for server to shut down.... done
server stopped
waiting for server to start....2024-04-09 22:30:26 IST [14521]: user=,db=,app=,client= LOG: redirecting log output to logging collector process
2024-04-09 22:30:26 IST [14521]: user=,db=,app=,client= HINT: Future log output will appear in directory "log".
done
server started
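If your PostgreSQL instance is managed by systemd instead, the equivalent is usually (the unit name postgresql-15 is an assumption; check yours with systemctl list-unit-files):
systemctl restart postgresql-15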
Step 4: Generate a Report
Run the following command:
Syntax: pgbadger -f stderr [log file path] -O [output directory] -o [report file name]
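pgBadger writes the report into the directory passed with -O, so create that directory first if it does not already exist:
mkdir -p /var/www/pgbadger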
pgbadger -f stderr /pgdata/15/data/log/postgresql-* -O /var/www/pgbadger -o pgbadger2.html
[root@localhost pgbadger-12.4]# pgbadger -f stderr /pgdata/15/data/log/postgresql-* -O /var/www/pgbadger -o pgbadger2.html
LOG: Ok, generating html report... 37110 bytes of 37110 (100.00%), queries: 42, events: 26
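To view the HTML report from your workstation, one quick option (assuming Python 3 is installed; any web server will do) is to serve the output directory over HTTP and browse to port 8000 on the server:
cd /var/www/pgbadger
python3 -m http.server 8000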
Step 5: Set Up a Cron Job
The cron job below runs pgBadger every day at 9 AM and generates a report whose file name carries a date and timestamp; adjust the schedule to your needs. Note that the % characters in the date format are escaped as \% because cron treats an unescaped % as a line separator.
[root@localhost pgbadger]# which pgbadger
/usr/local/bin/pgbadger
[root@localhost pgbadger]# crontab -l
0 9 * * * /usr/local/bin/pgbadger -f stderr /pgdata/15/data/log/postgresql-* -O /var/www/pgbadger -o "/var/www/pgbadger/pgbadger_report_$(date +'\%Y-\%m-\%d_\%H-\%M-\%S').html" >> /var/www/pgbadger/cron_output.log 2>&1
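Daily reports accumulate over time; a companion crontab entry (a sketch, the 30-day retention is an arbitrary choice) can prune old ones:
0 10 * * * find /var/www/pgbadger -name 'pgbadger_report_*.html' -mtime +30 -delete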
Use the --help option to see the complete list of options:
[root@localhost pgbadger]# pgbadger --help
Usage: pgbadger [options] logfile [...]
PostgreSQL log analyzer with fully detailed reports and graphs.
Arguments:
logfile can be a single log file, a list of files, or a shell command
returning a list of files. If you want to pass log content from stdin
use - as filename. Note that input from stdin will not work with csvlog.
Options:
-a | --average minutes : number of minutes to build the average graphs of
queries and connections. Default 5 minutes.
-A | --histo-average min: number of minutes to build the histogram graphs
of queries. Default 60 minutes.
-b | --begin datetime : start date/time for the data to be parsed in log
(either a timestamp or a time)
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
-D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log
(either a timestamp or a time)
-E | --explode : explode the main report by generating one report
per database. Global information not related to a
database is added to the postgres database report.
-f | --format logtype : possible values: syslog, syslog2, stderr, jsonlog,
csv, pgbouncer, logplex, rds and redshift. Use this
option when pgBadger is not able to detect the log
format.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-H | --html-outdir path: path to directory where HTML report must be written
in incremental mode, binary files stay on directory
defined with -O, --outdir option.
-i | --ident name : programname used as syslog ident. Default: postgres
-I | --incremental : use incremental mode, reports will be generated by
days in a separate directory, --outdir must be set.
-j | --jobs number : number of jobs to run at same time for a single log
file. Run as single by default or when working with
csvlog format.
-J | --Jobs number : number of log files to parse in parallel. Process
one file at a time by default.
-l | --last-parsed file: allow incremental log parsing by registering the
last datetime and line parsed. Useful if you want
to watch errors since last run or if you want one
report per day with a log rotated each week.
-L | --logfile-list file:file containing a list of log files to parse.
-m | --maxlength size : maximum length of a query, it will be restricted to
the given size. Default truncate size is 100000.
-M | --no-multiline : do not collect multiline statements to avoid garbage
especially on errors that generate a huge report.
-N | --appname name : only report on entries for given application name
-o | --outfile filename: define the filename for the output. Default depends
on the output format: out.html, out.txt, out.bin,
or out.json. This option can be used multiple times
to output several formats. To use json output, the
Perl module JSON::XS must be installed, to dump
output to stdout, use - as filename.
-O | --outdir path : directory where out files must be saved.
-p | --prefix string : the value of your custom log_line_prefix as
defined in your postgresql.conf. Only use it if you
aren't using one of the standard prefixes specified
in the pgBadger documentation, such as if your
prefix includes additional variables like client IP
or application name. MUST contain escape sequences
for time (%t, %m or %n) and processes (%p or %c).
See examples below.
-P | --no-prettify : disable SQL queries prettify formatter.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-Q | --query-numbering : add numbering of queries to the output when using
options --dump-all-queries or --normalized-only.
-r | --remote-host ip : set the host where to execute the cat command on
remote log file to parse the file locally.
-R | --retention N : number of weeks to keep in incremental mode. Defaults
to 0, disabled. Used to set the number of weeks to
keep in output directory. Older weeks and days
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-S | --select-only : only report SELECT queries.
-t | --top number : number of queries to store/display. Default: 20.
-T | --title string : change title of the HTML page report.
-u | --dbuser username : only report on entries for the given user.
-U | --exclude-user username : exclude entries for the specified user from
report. Can be used multiple time.
-v | --verbose : enable verbose or debug mode. Disabled by default.
-V | --version : show pgBadger version and exit.
-w | --watch-mode : only report errors just like logwatch could do.
-W | --wide-char : encode html output of queries into UTF8 to avoid
Perl message "Wide character in print".
-x | --extension : output format. Values: text, html, bin or json.
Default: html
-X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat, bzcat or unzip is not in your path.
-Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
Use this to adjust date/time in JavaScript graphs.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-file filename: path of the file that contains each regex to use
to exclude queries from the report. One regex per
line.
--include-query regex : any query that does not match the given regex will
be excluded from the report. You can use this
option multiple times. For example: "(tbl1|tbl2)".
--include-file filename: path of the file that contains each regex to the
queries to include from the report. One regex per
line.
--disable-error : do not generate error report.
--disable-hourly : do not generate hourly report.
--disable-type : do not generate report of queries by type, database
or user.
--disable-query : do not generate query reports (slowest, most
frequent, queries by users, by database, ...).
--disable-session : do not generate session report.
--disable-connection : do not generate connection report.
--disable-lock : do not generate lock report.
--disable-temporary : do not generate temporary report.
--disable-checkpoint : do not generate checkpoint/restartpoint report.
--disable-autovacuum : do not generate autovacuum report.
--charset : used to set the HTML charset to be used.
Default: utf-8.
--csv-separator : used to set the CSV field separator, default: ,
--exclude-time regex : any timestamp matching the given regex will be
excluded from the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--include-time regex : only timestamps matching the given regex will be
included in the report. Example: "2013-04-12 .*"
You can use this option multiple times.
--exclude-db name : exclude entries for the specified database from
report. Example: "pg_dump". Can be used multiple
times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump". Can be used
multiple times.
--exclude-line regex : exclude any log entry that will match the given
regex. Can be used multiple times.
--exclude-client name : exclude log entries for the specified client ip.
Can be used multiple times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
--noreport : no reports will be created in incremental mode.
--log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an md5 sum under each query report.
--journalctl command : command to use to replace PostgreSQL logfile by
a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir path : set the path where the pid file must be stored.
Default /tmp
--pid-file file : set the name of the pid file to manage concurrent
execution of pgBadger. Default: pgbadger.pid
--rebuild : used to rebuild all html reports in incremental
output directories where there's binary data files.
--pgbouncer-only : only show PgBouncer-related menus in the header.
--start-monday : in incremental mode, calendar weeks start on
Sunday. Use this option to start on a Monday.
--iso-week-number : in incremental mode, calendar weeks start on
Monday and respect the ISO 8601 week number, range
01 to 53, where week 1 is the first week that has
at least 4 days in the new year.
--normalized-only : only dump all normalized queries to out.txt
--log-timezone +/-XX : Set the number of hours from GMT of the timezone
that must be used to adjust date/time read from
log file before beeing parsed. Using this option
makes log search with a date/time more difficult.
The value can be an integer, ex.: 2, or a float,
ex.: 2.5.
--prettify-json : use it if you want json output to be prettified.
--month-report YYYY-MM : create a cumulative HTML report over the specified
month. Requires incremental output directories and
the presence of all necessary binary data files
--day-report YYYY-MM-DD: create an HTML report over the specified day.
Requires incremental output directories and the
presence of all necessary binary data files
--noexplain : do not process lines generated by auto_explain.
--command CMD : command to execute to retrieve log entries on
stdin. pgBadger will open a pipe to the command
and parse log entries generated by the command.
--no-week : inform pgbadger to not build weekly reports in
incremental mode. Useful if it takes too much time.
--explain-url URL : use it to override the url of the graphical explain
tool. Default: https://explain.depesz.com/
--tempdir DIR : set directory where temporary files will be written
Default: File::Spec->tmpdir() || '/tmp'
--no-process-info : disable changing process title to help identify
pgbadger process, some system do not support it.
--dump-all-queries : dump all queries found in the log file replacing
bind parameters included in the queries at their
respective placeholders positions.
--keep-comments : do not remove comments from normalized queries. It
can be useful if you want to distinguish between
same normalized queries.
--no-progressbar : disable progressbar.
--dump-raw-csv : parse the log and dump the information into CSV
format. No further processing is done, no report.
--include-pid PID : only report events related to the session pid (%p).
Can be used multiple time.
--include-session ID : only report events related to the session id (%c).
Can be used multiple time.
pgBadger is able to parse a remote log file using a passwordless ssh connection.
Use -r or --remote-host to set the host IP address or hostname. There are also
some additional options to fully control the ssh connection.
--ssh-program ssh path to the ssh program to use. Default: ssh.
--ssh-port port ssh port to use for the connection. Default: 22.
--ssh-user username connection login name. Defaults to running user.
--ssh-identity file path to the identity file to use.
--ssh-timeout second timeout to ssh connection failure. Default: 10 sec.
--ssh-option options list of -o options to use for the ssh connection.
Options always used:
-o ConnectTimeout=$ssh_timeout
-o PreferredAuthentications=hostbased,publickey
Log file to parse can also be specified using an URI, supported protocols are
http[s] and [s]ftp. The curl command will be used to download the file, and the
file will be parsed during download. The ssh protocol is also supported and will
use the ssh command like with the remote host use. See examples bellow.
Return codes:
0: on success
1: die on error
2: if it has been interrupted using ctr+c for example
3: the pid file already exists or can not be created
4: no log file was given at command line
Examples:
pgbadger /var/log/postgresql.log
pgbadger /var/log/postgres.log.2.gz /var/log/postgres.log.1.gz /var/log/postgres.log
pgbadger /var/log/postgresql/postgresql-2012-05-*
pgbadger --exclude-query="^(COPY|COMMIT)" /var/log/postgresql.log
pgbadger -b "2012-06-25 10:56:11" -e "2012-06-25 10:59:11" /var/log/postgresql.log
cat /var/log/postgres.log | pgbadger -
# Log line prefix with stderr log output
pgbadger --prefix '%t [%p]: user=%u,db=%d,client=%h' /pglog/postgresql-2012-08-21*
pgbadger --prefix '%m %u@%d %p %r %a : ' /pglog/postgresql.log
# Log line prefix with syslog log output
pgbadger --prefix 'user=%u,db=%d,client=%h,appname=%a' /pglog/postgresql-2012-08-21*
# Use my 8 CPUs to parse my 10GB file faster, much faster
pgbadger -j 8 /pglog/postgresql-10.1-main.log
Use URI notation for remote log file:
pgbadger http://172.12.110.1//var/log/postgresql/postgresql-10.1-main.log
pgbadger ftp://username@172.12.110.14/postgresql-10.1-main.log
pgbadger ssh://username@172.12.110.14:2222//var/log/postgresql/postgresql-10.1-main.log*
You can use together a local PostgreSQL log and a remote pgbouncer log file to parse:
pgbadger /var/log/postgresql/postgresql-10.1-main.log ssh://username@172.12.110.14/pgbouncer.log
Reporting errors every week by cron job:
30 23 * * 1 /usr/bin/pgbadger -q -w /var/log/postgresql.log -o /var/reports/pg_errors.html
Generate report every week using incremental behavior:
0 4 * * 1 /usr/bin/pgbadger -q `find /var/log/ -mtime -7 -name "postgresql.log*"` -o /var/reports/pg_errors-`date +\%F`.html -l /var/reports/pgbadger_incremental_file.dat
This supposes that your log file and HTML report are also rotated every week.
Or better, use the auto-generated incremental reports:
0 4 * * * /usr/bin/pgbadger -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
will generate a report per day and per week.
In incremental mode, you can also specify the number of weeks to keep in the
reports:
/usr/bin/pgbadger --retention 2 -I -q /var/log/postgresql/postgresql.log.1 -O /var/www/pg_reports/
If you have a pg_dump at 23:00 and 13:00 each day during half an hour, you can
use pgBadger as follow to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
This will help avoid having COPY statements, as generated by pg_dump, on top of
the list of slowest queries. You can also use --exclude-appname "pg_dump" to
solve this problem in a simpler way.
You can also parse journalctl output just as if it was a log file:
pgbadger --journalctl 'journalctl -u postgresql-9.5'
or worst, call it from a remote host:
pgbadger -r 192.168.1.159 --journalctl 'journalctl -u postgresql-9.5'
you don't need to specify any log file at command line, but if you have other
PostgreSQL log files to parse, you can add them as usual.
To rebuild all incremental html reports after, proceed as follow:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
it will also update all resource files (JS and CSS). Use -E or --explode
if the reports were built using this option.
pgBadger also supports Heroku PostgreSQL logs using logplex format:
heroku logs -p postgres | pgbadger -f logplex -o heroku.html -
this will stream Heroku PostgreSQL log to pgbadger through stdin.
pgBadger can auto detect RDS and cloudwatch PostgreSQL logs using
rds format:
pgbadger -f rds -o rds_out.html rds.log
Each CloudSQL Postgresql log is a fairly normal PostgreSQL log, but encapsulated
in JSON format. It is autodetected by pgBadger but in case you need to force
the log format use `jsonlog`:
pgbadger -f jsonlog -o cloudsql_out.html cloudsql.log
This is the same as with the jsonlog extension, the json format is different
but pgBadger can parse both formats.
pgBadger also supports logs produced by CloudNativePG Postgres operator for Kubernetes:
pgbadger -f jsonlog -o cnpg_out.html cnpg.log
To create a cumulative report over a month use command:
pgbadger --month-report 2919-05 /path/to/incremental/reports/
this will add a link to the month name into the calendar view in
incremental reports to look at report for month 2019 May.
Use -E or --explode if the reports were built using this option.
Open the generated report (for example, /var/www/pgbadger/pgbadger2.html from Step 4) in a browser to explore it.
With pgBadger installed, you can now analyze your PostgreSQL logs, gain insight into database performance, and tune your database accordingly.