One way in which an individual researcher can assess the citation performance of his or her publications is to collate normalised citation rates, or percentile ranks, for each individual article. For example, he or she could calculate what proportion of those publications are among the most-cited 10% in their field.
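The percentile approach can be sketched in a few lines. The figures below are hypothetical: each value is an article's assumed percentile rank within its field (0 being the most cited), which in practice would come from a bibliometric database.

```python
# Hypothetical percentile ranks for one researcher's articles,
# each relative to its own field and publication year (0 = most cited).
percentiles = [3.5, 12.0, 45.0, 8.9, 60.2, 9.9]

# Proportion of articles among the most-cited 10% of their field:
# an article qualifies if its percentile rank is 10 or better.
top10_share = sum(p <= 10.0 for p in percentiles) / len(percentiles)
print(f"{top10_share:.0%} of articles are in the top 10%")  # prints "50% ..."
```

Here three of the six articles (at percentiles 3.5, 8.9, and 9.9) fall within the top 10%, giving a share of 50%.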
A single metric which has been devised for individual researchers is the h-index. Your h-index is the greatest value of h for which h of your publications have each been cited at least h times.
Thus, if you have 20 publications and they have each been cited exactly 20 times, this gives you an h-index of 20.
By contrast, if you have 10 publications and they have each been cited 100 times, this gives you an h-index of only 10.
Thus, your h-index cannot be any greater than your total number of publications, however highly they have been cited.
There are objections to the use of the h-index, as those two examples illustrate. One would instinctively regard the researcher with 10 articles each cited 100 times as having a better citation record than the researcher with 20 articles each cited 20 times. In terms of the h-index, however, the latter author comes out ahead.
Bibliometricians have therefore come up with many variants of, and complements to, the h-index, which are designed to address these flaws.