Quality Insights:
2011 Patient Experiences in Primary Care

Technical Appendix

Five of MHQP’s member health plans (Blue Cross Blue Shield of Massachusetts, Fallon Community Health Plan, Harvard Pilgrim Health Care, Health New England, and Tufts Health Plan) participated in the 2011 statewide patient experience survey. Participating health plans have worked in partnership with MHQP to bring consistency to this area of measurement and to make data about this important aspect of quality available to both physicians and consumers.

The statewide survey was conducted in the spring of 2011 and included patients sampled from all adult and pediatric practice sites with at least three physicians in MHQP’s Massachusetts Provider Database. The survey asked patients to report on their experiences with a particular named primary care physician and with that physician's practice. Sample sizes for the statewide initiative were designed to provide information at the practice site level, not at the individual physician level.

Survey Instrument

The project fielded a 55-question survey instrument composed of the best-performing items from two validated surveys:

  • The Ambulatory Care Experiences Survey (ACES) developed as part of an MHQP demonstration project in partnership with researchers from The Health Institute at Tufts-New England Medical Center; and
  • The Clinician/Group CAHPS® Survey, which was developed under the auspices of the Agency for Healthcare Research and Quality and has been endorsed by the National Quality Forum.

The ACES tool figured prominently in the development of the Clinician/Group CAHPS Survey, and thus there is extensive overlap between the two surveys. ACES has also been used in numerous large-scale survey initiatives in varied markets nationwide. Core items in ACES have emerged as the national standard for measuring patients' experiences with their primary care physicians.

The 2011 MHQP Patient Experience instrument covers a number of domains characterizing patients' experiences with their primary care physicians, including:

Quality of Doctor-Patient Interactions

  • Communication (How well doctors communicate with patients)
  • Integration of care (How well doctors coordinate care)
  • Knowledge of the patient (How well doctors know their patients)
  • Health promotion (How well doctors give preventive care and advice)

Organizational Features of Care

  • Organizational access (Getting timely appointments, care, and information)
  • Visit-based continuity (Seeing your own doctor)
  • Office staff (Getting quality care from staff in the doctor's office)

Adult and child versions of the survey instrument were administered in the 2011 statewide survey. The adult survey was designed to be completed by the adult patient of the named physician. The child survey was designed to be completed by the parent or guardian of the child patient of the named physician.

The survey questions were developed and validated over several years, building on work conducted over a 15-year period by a team of internationally recognized survey scientists in the health care field. The survey's conceptual model for measuring primary care corresponds to the Institute of Medicine definition of primary care (1996). Each survey question has undergone cognitive testing to ensure that the wording is clear and interpreted consistently by individuals across a wide continuum of English literacy skills. All survey questions and composite measures have undergone extensive psychometric testing to ensure reliability, validity, and data quality.

Survey Administration

The survey was fielded in two rounds using both mail and Internet modes for response. The initial mailing package included:

  • A cover letter to the patient explaining the survey and its importance;
  • The web address for the patient to access the survey on the internet; and
  • A paper copy of the survey, should the patient not have internet access or simply prefer to complete a paper survey.

Non-respondents were sent a second survey package, identical to the first, 3-4 weeks after the initial mailing.

All survey materials had the patient's health plan name and logo at the top of the materials and a health plan official's signature on the cover letter.

Physician and Practice Site Inclusion Criteria

The following specifications were used to determine the inclusion of physicians and practice sites in the survey.

For physicians:

  • Primary specialty designation of Internal Medicine, Pediatrics, Family Medicine, or General Medicine; and
  • Panel size of at least 50 eligible patients across the five participating health plans.

For practice sites:

  • At least three eligible physicians meeting the criteria above. Each physician was classified as either "adult" or "child", based on the age of the majority of his or her patients in the sample pool (child=ages 0-17; adult=ages 18 and older).
  • Practice sites were classified as "adult" if they had three or more physicians, each with 50 or more eligible adult patients. Practice sites were classified as "child" if they had three or more physicians, each with 50 or more eligible child patients. Practice sites were classified as "mixed" if they met both sets of criteria (adult and child practice site).
  • Based on the number of adult and pediatric physicians within each practice site, the composition of the survey sample(s) was drawn using the following criteria (applied in the order listed):
    • If a practice site was classified as "mixed" then separate survey samples of adults and children were drawn.
    • If a practice site was either "adult" or "child" (but not mixed), a single survey sample of adult or child patients, respectively, was drawn.
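The classification rules above can be sketched in code. This is a simplified illustration with hypothetical function names, not MHQP's actual implementation:

```python
def classify_physician(patient_ages):
    """Classify a physician as "adult" or "child" based on the majority
    age group of their patients in the sample pool
    (child = ages 0-17; adult = ages 18 and older)."""
    n_child = sum(1 for age in patient_ages if age <= 17)
    n_adult = len(patient_ages) - n_child
    return "child" if n_child > n_adult else "adult"

def classify_site(physicians):
    """Classify a practice site from a list of per-physician
    (classification, eligible_patient_count) pairs, per the inclusion
    criteria above: three or more physicians of a type, each with
    50 or more eligible patients."""
    n_adult = sum(1 for cls, n in physicians if cls == "adult" and n >= 50)
    n_child = sum(1 for cls, n in physicians if cls == "child" and n >= 50)
    if n_adult >= 3 and n_child >= 3:
        return "mixed"
    if n_adult >= 3:
        return "adult"
    if n_child >= 3:
        return "child"
    return None  # site does not meet the inclusion criteria
```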

Practice site groupings are based on where a physician was practicing as of December 31, 2010. Changes in practice site composition after this date are not reflected in the 2011 MHQP survey.

Patient Sample Selection

The adult and child patients surveyed for each site were randomly drawn based on visit and membership data from five participating health plans. To be eligible for the survey, patients had to meet the following criteria:

  • Current enrollment in the health plan;
  • Commercial member in an HMO, POS, or PPO health plan product;
  • Age 18 and older to receive an adult survey;
  • Age 17 or younger to receive a pediatric survey; and
  • Patients of Massachusetts primary care physicians.

MHQP used both visit data and health plan membership data to link patients to their primary care physicians. The methodology considered whether primary care services were received by the member and how often and recently the physician was seen. Once patients had been assigned to physicians, the information was used to sample the required number of patients per physician from each practice site.

To ensure that only active patients of a physician were included in analysis and data reports, the survey instrument included initial screening questions to confirm the following:

  • The patient considered the physician named on the survey to be their primary physician (adult survey) or their child's primary physician (child survey); and
  • The patient had at least one visit with that physician in the previous 12 months.

Responses of patients who reported that the named physician was not their (or their child's) primary physician and/or reported having no visits with that physician in the past 12 months were not included in the analysis completed for this report.
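Applying the two screening items above might look like the following sketch; the field names are illustrative assumptions, not MHQP's actual data schema:

```python
def is_active_patient(response):
    """Keep a response only if the named physician is the patient's
    (or the child's) primary physician AND there was at least one
    visit with that physician in the previous 12 months."""
    return (response.get("is_primary_physician") is True
            and response.get("visits_past_12_months", 0) >= 1)

responses = [
    {"is_primary_physician": True,  "visits_past_12_months": 2},  # kept
    {"is_primary_physician": False, "visits_past_12_months": 4},  # excluded
    {"is_primary_physician": True,  "visits_past_12_months": 0},  # excluded
]
analyzable = [r for r in responses if is_active_patient(r)]
```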

Sampling Protocol

MHQP utilized a "variable" sampling strategy based on the size of the practice site being surveyed. The targeted number of completed surveys and initial sample sizes are provided in the table below:

Number of MDs per site | Target number of completes per site | Starting sample (assuming 35% response rate)

The rationale for this approach comes from findings in the 2002-03 MHQP demonstration project. That project and related work have demonstrated that the individual physician is a major source of variation in the patient survey measures. The individual physician is a larger source of variation than the practice site for most measures and larger than the medical group for all measures. Thus, the number of patients required to obtain reliable and stable information about a practice site varies in accordance with the number of physicians at that site. The sampling approach used for the statewide survey takes this into account, employing larger site-level samples for practices with more physicians. (The table above indicates the number of completed surveys targeted for practice sites of varying sizes, and the size of the starting sample drawn in an effort to obtain that target sample size.) At each practice site, starting samples were drawn by randomly sampling an equal number from among each physician's eligible patients.
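The arithmetic linking the target number of completes, the assumed 35% response rate, and the equal per-physician draw can be sketched as follows. The helper name and example target are hypothetical; the actual per-site targets come from the table above:

```python
import math

def starting_sample(target_completes, n_physicians, response_pct=35):
    """Return the starting sample needed to reach a target number of
    completed surveys at an assumed response rate (35% by default),
    drawn as an equal number of patients per physician."""
    total_needed = math.ceil(target_completes * 100 / response_pct)
    per_physician = math.ceil(total_needed / n_physicians)  # equal draw per MD
    return per_physician * n_physicians

# e.g., a hypothetical target of 70 completes at a 5-physician site:
# 70 / 0.35 = 200 surveys mailed, i.e., 40 patients sampled per physician
```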

Site-Level Reliability

All survey questions and summary measures underwent extensive psychometric testing. One key criterion by which all survey measures were evaluated is their site-level reliability. Site-level reliability is a metric that indicates how accurately a survey measure captures information about a particular practice site. Specifically, the site-level reliability coefficient indicates the extent to which patients of a given practice site report similarly about their experiences with that practice. Reliability scores range from 0 to 1.0 where:

  • 1.0 signifies a measure for which every patient of the site reports an experience identical to every other patient in the practice; and
  • 0.0 signifies a measure for which there is no consistency or commonality of experiences reported by patients of a given practice.

Any results for which the sample was too small to achieve a reliability threshold of 0.70 are not included in this report.
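Site-level reliability of this kind is conventionally estimated from one-way ANOVA variance components. The sketch below illustrates the idea; MHQP's exact estimator is not specified in this appendix, so treat this as an assumption:

```python
from statistics import mean

def site_level_reliability(scores_by_site):
    """Estimate site-level reliability as the share of score variance
    attributable to between-site differences, adjusted for the average
    number of respondents per site (one-way ANOVA components).
    scores_by_site: one list of patient-level scores per practice."""
    k = len(scores_by_site)                        # number of sites
    n = mean(len(s) for s in scores_by_site)       # avg respondents per site
    grand = mean(x for s in scores_by_site for x in s)
    msb = sum(len(s) * (mean(s) - grand) ** 2
              for s in scores_by_site) / (k - 1)   # mean square between sites
    msw = sum((x - mean(s)) ** 2
              for s in scores_by_site for x in s) / \
          sum(len(s) - 1 for s in scores_by_site)  # mean square within sites
    var_between = max((msb - msw) / n, 0.0)
    return var_between / (var_between + msw / n)
```

A measure reaches 1.0 when patients within each site agree perfectly and sites differ, and falls toward 0 as within-site disagreement swamps between-site differences, matching the interpretation above.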

Performance Categories

For the majority of the summary measures represented in the report, physician practice performance is displayed in percentile terms relative to all practices statewide. There are four performance categories:

  1. Above the 85th percentile (denoted by four stars),
  2. Above the 50th percentile but below the 85th (three stars),
  3. Above the 15th percentile but below the 50th (two stars); and
  4. Below the 15th percentile (one star).

For certain measures, cutpoints are not drawn at the 15th, 50th, and 85th percentiles because statewide performance was consistently high. For example, 95% of adult and pediatric practices statewide achieved performance at or above 90 points on Communication. Similarly, statewide performance for the Knowledge of Patient for pediatric practices was very high. Eighty-one percent of pediatric practices statewide scored at 90 points or above for the Knowledge of Patient. Therefore, cutpoints for these measures are based on absolute thresholds (80, 90, and 95 points, respectively) rather than percentiles.
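The default percentile-based category assignment described above can be sketched as a hypothetical helper (for the consistently high-scoring measures just noted, fixed cutpoints of 80, 90, and 95 points would replace the percentile ranks):

```python
def star_category(score, statewide_scores):
    """Assign the 1-4 star category from a practice's percentile rank
    among all practices statewide, using the default 15th/50th/85th
    percentile cutpoints. The percentile rank here is the fraction of
    statewide practices scoring strictly below the given score."""
    pct = sum(1 for s in statewide_scores if s < score) / len(statewide_scores)
    if pct > 0.85:
        return 4   # four stars
    if pct > 0.50:
        return 3   # three stars
    if pct > 0.15:
        return 2   # two stars
    return 1       # one star
```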

Risk of Misclassifying Results

The performance reporting approach is specifically designed to minimize the risk of misclassification. This is done by using only a small number of performance categories, and by defining a buffer zone around each performance cutpoint to give added assurance. Using this reporting methodology ensures that the risk of misclassification averages no more than two percent across all survey measures.

About the Willingness to Recommend Rating

People often ask family and friends for advice when they are looking for a physician. In the MHQP survey, patients are asked how likely they would be to recommend their physician, and results are reported as an overall rating item, “Willingness to Recommend”. This survey item is a rating question and does not give specific information about patients’ experiences with care; therefore, MHQP does not rank performance or denote the level of practice performance with stars. Instead, the response frequencies on a five-point scale (Definitely yes, Probably yes, Not sure, Probably not, Definitely not) are provided. MHQP recommends that “Willingness to Recommend” results be used as only one factor, along with the patient experience quality measures reported on this website, when assessing a practice site’s performance.

Response Rate

The overall response rate to the survey was 34%. Extensive previous analyses have been conducted to determine whether non-response poses a threat to the data integrity of site-level results. Non-response would pose a threat to site-level comparisons if the underlying causes of non-response differed markedly by site. A multifaceted analysis of non-response found no evidence of meaningful differences in the nature or extent of non-response by site. A common set of factors was found to predict non-response, and differences in response rates were found to be approximately one-fifth the magnitude required to introduce significant bias. Overall, the analyses indicated that the effect of non-response on the statewide results was to raise all scores somewhat and to slightly narrow the observed differential among sites.

