Performance Co-Pilot (PCP) is an open source framework and toolkit for monitoring, analyzing, and responding to details of live and historical system performance. PCP has a fully distributed, plug-in based architecture making it particularly well suited to centralized analysis of complex environments and systems. Custom performance metrics can be added using the C, C++, Perl, and Python interfaces.
This page provides quick instructions on how to install and use PCP on a set of hosts, one of which (the monitor host) is used to monitor and analyze both itself and the other hosts (the collector hosts).
PCP is available on all recent Linux distribution releases, including Debian/Fedora/RHEL/SUSE/Ubuntu. For other operating systems and distributions, consider installing from source.
# yum install pcp    # or apt-get or dnf or zypper
# systemctl enable pmcd
# systemctl start pmcd
# systemctl enable pmlogger
# systemctl start pmlogger
Here we enable the Performance Metrics Collector Daemon (pmcd(1)) on the host; it controls and requests metrics on behalf of clients from the various Performance Metrics Domain Agents (PMDAs). The PMDAs provide the actual data from the different components (domains) of the system, for example from the Linux kernel PMDA or the NFS client PMDA. The default configuration includes over 1000 metrics with negligible overall overhead when queried; if no queries are sent to an agent, it does nothing at all. Local PCP archive logging with pmlogger(1) is also enabled on the host for convenience.
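Once both services are running, a quick sanity check is to fetch a metric through the local pmcd and to look at the per-agent status metric (a minimal sketch; pmprobe(1) and the pmcd.agent.status metric ship with the base pcp package):

$ pmprobe -v kernel.all.load     # fetch a metric via the local pmcd
$ pminfo -f pmcd.agent.status    # one value per loaded PMDA, 0 means healthy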
Optional PMDAs are enabled by running their Install script; for example, to enable the PostgreSQL PMDA:

# cd /var/lib/pcp/pmdas/postgresql
# ./Install
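If the installation succeeds, the new metrics appear immediately under the agent's namespace; for example, for the PostgreSQL PMDA just installed:

$ pminfo postgresql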
The client tools contact local or remote PMCDs as needed; communication with PMCD over the network uses TCP port 44321 by default.
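A simple way to verify connectivity to a remote collector is to fetch a metric from it directly (acme.com is a placeholder; make sure TCP port 44321 is open in any firewall along the path):

$ pminfo -h acme.com -f kernel.all.load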
The following additional packages can optionally be installed on the monitoring host to extend the set of monitoring tools beyond those in the base pcp package.
# yum install pcp-doc pcp-gui pcp-system-tools    # or apt-get or dnf or zypper
To enable centralized archive log collection on the monitoring host, its pmlogger is configured to fetch performance metrics from collector hosts. Add each collector host to the pmlogger configuration file /etc/pcp/pmlogger/control and then restart the pmlogger service on the monitoring host.
# echo acme.com n n PCP_LOG_DIR/pmlogger/acme.com -r -T24h10m -c config.acme.com >> /etc/pcp/pmlogger/control
# systemctl restart pmlogger
The health of the remote log collectors is checked automatically every half hour. You can also run /usr/libexec/pcp/bin/pmlogger_check -V -C (on Fedora/RHEL) or /usr/lib/pcp/bin/pmlogger_check -V -C (on Debian/Ubuntu) manually to perform a health check.
Note that a default configuration file (config.acme.com above) will be generated if it does not already exist. This step is optional (a custom configuration can be provided for each host instead); see the pmlogconf(1) manual page for details.
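For example, pmlogconf(1) can be run against a configuration file to create it, or to interactively update the groups of metrics being logged (the path below is illustrative; use the path given with -c in the control file):

# pmlogconf config.acme.com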
In dynamic environments, manually configuring every host is not feasible, perhaps even impossible. PCP Manager (pmmgr(1), from the pcp-manager package) can be used instead of invoking pmlogger directly, to auto-discover and auto-configure new collector hosts.
# yum install pcp-manager    # or apt-get or dnf or zypper
# systemctl enable pmmgr
# echo acme.com >> /etc/pcp/pmmgr/target-host
# echo avahi >> /etc/pcp/pmmgr/target-discovery
# echo probe=ip.addr.tup.le/netmask >> /etc/pcp/pmmgr/target-discovery
# systemctl restart pmmgr
# find /var/log/pcp/pmmgr
Running PCP collector instances on the local network can also be discovered with pmfind(1):

$ pmfind -s pmcd
A basic health check of running services, network connectivity between hosts, and enabled PMDAs can be done as follows.
$ pcp -h munch
Performance Co-Pilot configuration on munch:

 platform: SunOS munch 5.11 oi_151a8 i86pc
 hardware: 4 cpus, 3 disks, 4087MB RAM
 timezone: EST-10
 services: pmcd pmproxy
     pmcd: Version 3.8.9-1, 3 agents
     pmda: pmcd mmv solaris
     pmie: /var/log/pcp/pmie/munch/pmie.log

$ pcp -a /var/log/pcp/pmlogger/smash/20140729
Performance Co-Pilot configuration on smash:

  archive: /var/log/pcp/pmlogger/smash/20140729
 platform: Linux smash 2.6.32-279.46.1.el6.x86_64 #1 SMP Mon May 19 16:16:00 EDT 2014 x86_64
 hardware: 8 cpus, 2 disks, 1 node, 23960MB RAM
 timezone: EST-10
 services: pmcd pmproxy pmwebd
     pmcd: Version 3.9.8-1, 8 agents
     pmda: pmcd proc xfs linux mmv nvidia dmcache postgresql
 pmlogger: primary logger: /var/log/pcp/pmlogger/smash/20140729.00.10
     pmie: /var/log/pcp/pmie/smash/pmie.log
PCP comes with a wide range of command line utilities for accessing live performance metrics via PMCDs or historical data using archive logs. The following examples illustrate some of the most useful use cases; see the corresponding manual page for each command for additional information. In the examples below, -h <host> can be used to query a remote host; the default is the local host. Shell completion support for Bash, and especially for Zsh, allows completing available metrics, metricsets (with pmrep), and command line options.
List all the enabled performance metrics, with a one-line description for each:

$ pminfo -t
Display detailed information about a metric, including its descriptor, current values, and help text:

$ pminfo -dfmtT disk.partitions.read
Display live values of a metric every two seconds, with three decimal places:

$ pmval -t 2sec -f 3 disk.partitions.write
Display several metrics side by side from a remote host:

$ pmdumptext -Xlimu -t 2sec 'kernel.all.load[1]' mem.util.used disk.partitions.write -h acme.com
Report metrics in CSV format, with timestamps and values scaled to gigabytes:

$ pmrep -p -b GB -t 2sec -o csv kernel.all.sysfork mem.util.free mem.util.used
Run the familiar atop using PCP metrics:

$ pcp atop
Run atopsar-style reporting using PCP metrics:

$ pcp atopsar
Display a high-level system performance overview for several hosts at once:

$ pmstat -t 2sec -h acme1.com -h acme2.com
Display iostat-like per-device statistics:

$ pmiostat -t 2sec
Plot live metrics from several hosts graphically:

$ pmchart -t 2sec -h acme1.com -h acme2.com
PCP archive logs are located under /var/log/pcp/pmlogger/hostname, and the archive names indicate the time they cover. Archives are self-contained, and machine- and version-independent, so they can be transferred to any machine for offline analysis.
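Each archive is a set of files: one or more data volumes (.0, .1, ...), a temporal index (.index), and a metadata file (.meta). Listing a per-host directory shows what is available (output here is illustrative):

$ ls /var/log/pcp/pmlogger/acme.com
20140902.00.10.0  20140902.00.10.index  20140902.00.10.meta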
Check the label and summary information of an archive:

$ pmdumplog -L acme.com/20140902
Check the host configuration at the time the archive was recorded:

$ pcp -a acme.com/20140902
List the metrics recorded in an archive:

$ pminfo -a acme.com/20140902
Display detailed information and recorded values for a metric:

$ pminfo -df mem.freemem -a acme.com/20140902
Replay the recorded values of a metric:

$ pmval -f 3 disk.partitions.write -a acme.com/20140902
Replay values between 09:00 and 10:00 at two-second intervals, pacing the replay in real time with -d:

$ pmval -d -t 2sec -f 3 disk.partitions.write -S @09:00 -T @10:00 -a acme.com/20140902
Calculate statistics, including averages, minima, and maxima, for metrics over a one-hour window:

$ pmlogsummary -HlfiImM -S @09:00 -T @10:00 acme.com/20140902 disk.partitions.write mem.freemem
Display several metrics side by side at ten-minute intervals over the same window:

$ pmdumptext -Xlimu -t 10m -S @09:00 -T @10:00 'kernel.all.load[1]' 'mem.util.used' 'disk.partitions.write' -a acme.com/20140902
Replay a vmstat-style report from the archive, averaged over five-minute intervals, in UTC:

$ pmrep -a acme.com/20140902 -A 5min -t 5min -Z UTC :vmstat
Compare two time windows, possibly across different archives, to spot metrics with significantly changed values:

$ pmdiff -S @02:00 -T @03:00 -B @09:00 -E @10:00 acme.com/20140902 acme.com/20140901
Replay the archive with atop, starting at 09:00 (two equivalent invocations):

$ pcp atop -b 09:00 -r acme.com/20140902
$ pcp -S @09:00 -a acme.com/20140902 atop
Display a system performance overview from the archive at ten-minute intervals:

$ pmstat -t 10m -S @09:00 -T @10:00 -a acme.com/20140902
Display iostat-like statistics from the archive at one-hour intervals:

$ pmiostat -t 1h -a acme.com/20140902
Check memory usage as of 10:02, in the style of free(1):

$ pcp -a acme.com/20140902 -O @10:02 free
Plot recorded metrics between 09:00 and 10:00:

$ pmchart -t 2sec -S @09:00 -T @10:00 -a acme.com/20140902
Merge several archives into a new combined archive:

$ pmlogextract <archive1> <archive2> <newarchive>
iostat and sar data can be imported as PCP archives, which then allows inspecting and visualizing the data with PCP tools. The iostat2pcp(1) importer is in the pcp-import-iostat2pcp package and the sar2pcp(1) importer is in the pcp-import-sar2pcp package.
$ iostat -t -x 2 > iostat.out
$ iostat2pcp iostat.out iostat.pcp
$ pmchart -t 2sec -a iostat.pcp
$ sar2pcp /var/log/sa/sa15 sar.pcp
$ pmchart -t 2sec -a sar.pcp
PCP provides details of each running process via the standard PCP interfaces and tools on the local host, but due to security and performance considerations, most process-related information is not stored in archive logs by default. Also for security reasons, only root can access some details of other users' running processes.
Custom application instrumentation is possible with the Memory Mapped Value (MMV) PMDA.
List all the process-related metrics:

$ pminfo proc
Monitor the number of open file descriptors of process 1234:

$ pmval -t 2sec 'proc.fd.count[1234]'
Monitor the CPU time, resident memory, and thread count of a process:

$ pmdumptext -Xlimu -t 2sec 'proc.psinfo.utime[1234]' 'proc.memory.rss[1234]' 'proc.psinfo.threads[1234]'
Report outgoing network metrics for the wlan0 interface:

$ pmrep -i wlan0 -v network.interface.out
Check which process metrics were recorded in an archive:

$ pminfo proc -a acme.com/20140902
Check the number of running processes on 2014-08-20 at 14:00:

$ pmval -s 1 -S @"2014-08-20 14:00" proc.nprocs -a acme.com/20140820
It is also possible to monitor "hot" or "interesting" processes by name, for example all processes whose command name is java or python. This monitoring of "hot" processes can also be enabled or disabled on the fly, based on certain criteria or from the command line. The metrics are made available under the hotproc namespace.
Configuring processes to be monitored constantly via the hotproc namespace is done with the configuration file /var/lib/pcp/pmdas/proc/hotproc.conf; see the pmdaproc(1) manual page for details. This allows monitoring these processes regardless of their PIDs, and also makes logging their metrics straightforward.
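A minimal sketch of such a file, assuming the predicate syntax described in pmdaproc(1) (the #pmdahotproc tag line and the fname/psargs tests are taken from that manual page; adjust the predicate to your needs):

#pmdahotproc
fname == "java" || psargs ~ /python/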
To set the hotproc predicate on the fly and inspect the resulting metrics:

# pmstore hotproc.control.config 'fname == "java"'
# pminfo -f hotproc
Applications can be instrumented in the PCP world by using Memory Mapped Values (MMVs). pmdammv is a PMDA which exports application-level performance metrics using memory mapped files. It offers an extremely low overhead instrumentation facility that is well-suited to long-running, mission-critical applications where it is desirable to have performance metrics and availability information permanently enabled.
Applications to be instrumented with MMV need to be PCP MMV aware; APIs are available for several languages, including C, C++, Perl, and Python. Java applications can use the separate Parfait class library to enable MMV.
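To verify from the shell that the MMV PMDA is enabled and to list whatever instrumented applications currently export (the Install step is needed only if mmv is not already shown in the pcp output above):

# cd /var/lib/pcp/pmdas/mmv
# ./Install
$ pminfo -f mmv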
See the Performance Co-Pilot Programmer's Guide PDF for more information about application instrumentation.
PCP provides a wide range of performance metrics, but in some cases the readily available metrics may not provide exactly what is needed. Derived metrics (see pmLoadDerivedConfig(3)) may be used to extend the available metrics with new (derived) metrics, using simple arithmetic expressions (see pmRegisterDerived(3)).
The following example illustrates how to define corresponding metrics which are displayed by sar -d but are not provided by default by PCP:
$ cat ./pcp-deriv-metrics.conf
disk.dev.avqsz = disk.dev.read_rawactive + disk.dev.write_rawactive
disk.dev.avrqsz = 2 * rate(disk.dev.total_bytes) / rate(disk.dev.total)
disk.dev.await = 1000 * (rate(disk.dev.read_rawactive) + rate(disk.dev.write_rawactive)) / rate(disk.dev.total)
$ export PCP_DERIVED_CONFIG=./pcp-deriv-metrics.conf
$ pmval -t 2sec -f 3 disk.dev.avqsz
$ pmval -t 2sec -f 3 disk.dev.avrqsz -h acme.com
$ pmval -t 2sec -f 3 disk.dev.await -a acme.com/20140902
Derived metrics can also be defined on the pmrep command line with -e:

$ pmrep -t 2sec -p -b MB -e "mem.util.allcache = mem.util.bufmem + mem.util.cached + mem.util.slab" mem.util.free mem.util.allcache mem.util.used
Performance Metrics Inference Engine (pmie(1)) can evaluate rules and generate alarms, run scripts, or automate system management tasks based on live or past performance metrics.
# systemctl enable pmie
# systemctl start pmie
To enable the monitoring host to run PMIE for collector hosts, add each host to the /etc/pcp/pmie/control configuration file.
# echo acme.com n PCP_LOG_DIR/pmie/acme.com -c config.acme.com >> /etc/pcp/pmie/control
# systemctl restart pmie
As an example of what can be done with PMIE, in plain English: if more than 5 GB of memory is in use, print a message. The rule below expresses this, is checked for syntax with -C, and is then evaluated against an archive:
$ cat pmie.ex
bloated = ( mem.util.used > 5 Gbyte )
       -> print "%v memory used on %h!"
$ pmie -C pmie.ex
$ pmie -t 1min -c pmie.ex -S @09:00 -T @10:00 -a acme.com/20140820
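Another illustrative rule, this time using the syslog action and instance selection as described in pmie(1) (the one-minute load instance and the threshold of 10 are arbitrary choices for this sketch):

$ cat pmie-load.ex
high_load = ( kernel.all.load #'1 minute' > 10 )
        -> syslog "%h: high 1-minute load average: %v"
$ pmie -C pmie-load.ex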
Performance Metrics Web Daemon (pmwebd(1)) is a front-end to both PMCD and PCP archives, providing a REST web service (over HTTP/JSON) suitable for use by web-based tools wishing to access performance data over HTTP. Custom applications can access all the available PCP information using this method, including custom metrics generated by custom PMDAs.
# yum install pcp-webapi    # or apt-get or dnf or zypper
# systemctl enable pmwebd
# systemctl start pmwebd
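As a sketch of exercising the REST API with curl, following the context/fetch URL layout described in PMWEBAPI(3) (the polltimeout parameter and the context number 1 in the second request are assumptions; substitute the number returned by the first call):

$ curl -s 'http://localhost:44323/pmapi/context?hostname=localhost&polltimeout=60'
{"context":1}
$ curl -s 'http://localhost:44323/pmapi/1/_fetch?names=kernel.all.load'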
Several browser interfaces for accessing PCP performance metrics are also available. These web interfaces make PCP metrics available via your choice of Grafana or Graphite.
After installing the PCP web services daemon as described above, install the pcp-webjs package and then point a browser at http://localhost:44323.
PCP PMDAs offer a way for administrators and developers to customize and extend the default PCP installation. The pcp-libs-devel package contains all the needed development examples, headers, and libraries. New PMDAs can easily be added; the pmda(3) manual page and the Performance Co-Pilot Programmer's Guide referenced above are good starting points for development.
Copyright © 2007-2010 Aconex