Agreement between fixed observers or methods that produce readings on a continuous scale is usually evaluated via one of several intraclass correlation coefficients (ICCs). This article raises and discusses several related issues that have not been addressed before. ICCs are usually presented in the context of a two-way analysis of variance (ANOVA) model. We argue that the ANOVA model rests on inadequate assumptions, such as homogeneity of the error variances and of the pairwise correlations between observers. We then present the concept of observer relational agreement, which has been used in the social sciences to derive the common ICCs without the restrictive ANOVA assumptions; this concept has received little attention in the biomedical literature. When observer agreement is defined in terms of the difference between the readings of different observers on the same subject (absolute agreement), the corresponding relational agreement coefficient coincides with the concordance correlation coefficient (CCC), which is itself an ICC. The CCC, which has gained popularity over the past 15 years, compares the mean squared difference between readings of observers on the same subject with the expected value of this quantity under 'chance agreement', defined as independence between observers. We argue that the assumption of independence is unrealistic in this context and propose a new coefficient that is not based on the concept of chance agreement.
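As a concrete illustration (not part of the article itself), the two-observer CCC described above can be sketched in a few lines of Python. The denominator is the expected squared difference between readings under 'chance agreement' (independence of the two observers), so the coefficient equals one minus the ratio of the observed mean squared difference to that expectation; the moment-based form below is a standard sample estimator.

```python
import numpy as np

def concordance_cc(x, y):
    """Sample concordance correlation coefficient for two observers'
    readings on the same n subjects (absolute agreement)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    # Moment (biased) variances and covariance.
    sx2 = x.var()
    sy2 = y.var()
    sxy = ((x - mx) * (y - my)).mean()
    # Equivalently: 1 - MSD / E[MSD under independence],
    # where E[MSD under independence] = sx2 + sy2 + (mx - my)**2.
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Perfectly identical readings give a CCC of 1, perfectly reversed readings give -1, and any systematic shift between observers pulls the coefficient below the ordinary Pearson correlation because the mean difference enters the denominator.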