Foundations of the NBCOT Certification Examinations
Learn more about how the exams are developed, constructed, and scored.
Exam Development & Construction
The procedures we use to prepare the certification exams are consistent with the technical standards and guidelines recommended by the American Educational Research Association, the American Psychological Association, the National Council on Measurement in Education, the Institute for Credentialing Excellence, and the Association of Test Publishers. Our test development and administration procedures also adhere to relevant sections of the Uniform Guidelines on Employee Selection Procedures (EEOC, 1978). Our certification programs are accredited by the National Commission for Certifying Agencies (NCCA) under its Standards for the Accreditation of Certification Programs.
OTR and COTA certification exams are constructed using a combination of scored items and unscored (field-test) items. Items for each exam are selected from the respective item banks in alignment with the OTR and COTA exam content outlines. Scored items have been field-tested on a large sample of candidates and have demonstrated acceptable statistical performance. All exams undergo a rigorous review and validation process with a committee of subject matter consultants to ensure that they contain content reflective of current entry-level occupational therapy practice and are fair, unbiased, and accessible to all test takers.
Professional testing standards and regulations require that certification exam content be based on a study of the profession called a practice analysis or job analysis. We conduct practice analyses every five years to identify the essential tasks performed by entry-level occupational therapy professionals and the knowledge required for proficient performance of those tasks. The results of the practice analyses are used to develop exam content outlines that guide content development for the OTR and COTA exams. Basing the exam content on a practice analysis ensures that the knowledge tested on the exam is linked directly to practice, thereby supporting the validity of the exam.
In the first phase of the practice analysis process, we gather input on current entry-level practice from panels of OTRs and COTAs who represent the demographic and practice characteristics of our certificant populations. The primary role of these panels is to identify the essential tasks that define current entry-level practice and the knowledge required to perform those tasks successfully. In the second phase of the study, we survey thousands of entry-level OTR and COTA certificants to gather feedback on the tasks and knowledge identified by the panels. The survey respondents rate the importance of the identified tasks to entry-level practice and the frequency with which they perform them. We also ask respondents to evaluate the knowledge required for proficient entry-level performance of the tasks. Finally, respondents have an opportunity to identify critical elements of occupational therapy practice that are not captured within the survey. More detail about the practice analysis process is provided in the summary reports.
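To illustrate how survey ratings of this kind are typically aggregated, here is a minimal sketch in Python. The criticality index shown (mean importance weighted by mean frequency) is a common convention in practice analysis work, not NBCOT’s documented formula, and the task names and rating values are invented for illustration.

```python
from statistics import mean

# Hypothetical importance and frequency ratings (1-5 scale) gathered
# from surveyed entry-level certificants; tasks and values are invented.
survey_ratings = {
    "Interpret screening results": {
        "importance": [5, 4, 5, 4],
        "frequency": [4, 4, 3, 5],
    },
    "Document intervention outcomes": {
        "importance": [5, 5, 4, 5],
        "frequency": [5, 5, 4, 4],
    },
}

def criticality(importance, frequency):
    """One common index: mean importance weighted by mean frequency."""
    return mean(importance) * mean(frequency)

for task, r in survey_ratings.items():
    print(f"{task}: criticality = {criticality(r['importance'], r['frequency']):.2f}")
```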
The most recent practice analyses for each credential were conducted in 2022.
Every year, we recruit OTR and COTA certificants to develop new questions or “items” for the certification exams. Consistent with accreditation standards (e.g., NCCA, 2021), the subject matter consultants (SMCs) who develop our exam items are representative of the demographic and professional characteristics of the entry-level certificant population and are qualified to provide input on occupational therapy exam content. Before new items are used on an exam, they undergo an additional review by another committee of SMCs. This review is designed to validate that the items are aligned with the exam content specifications, relevant to current occupational therapy practice, and fair and accessible to all candidates.
We consider fairness at every stage of the exam development and administration process. We adhere to recognized item writing, exam development, and review procedures to ensure readability, neutral language, accessibility, and the accuracy of terms used in our items. Additional steps we take to promote fairness during the exam development process include, but are not limited to, the following activities:
- Editing and reviewing items for issues of bias, fairness, stereotyping, and accessibility
- Ensuring equivalent content of each exam form by building each form according to the exam blueprint
- Ensuring equivalent difficulty of every exam form by using statistical equating methods
- Referencing items to approved and published resources in occupational therapy
- Inviting all certificants to volunteer in exam development activities
- Selecting OTR and COTA practitioners and educators from diverse geographical areas, practice experiences, and cultures to serve as subject matter consultants
- Field-testing items prior to their use as scored items on the exam
- Conducting regular reviews of item and exam statistics
During the exam registration and administration process, fairness is addressed in the following ways:
- Using established criteria and standardized procedures
- Ensuring exams are accessible to all candidates
- Using trained, professional proctors
- Maintaining the security of exam materials and equipment
Fairness is addressed after the exam by considering confidentiality, scoring accuracy, and the timeliness of reporting the results.
We use many different quality control measures to maintain the fairness, integrity, reliability, and validity of each version of the exams. All OTR and COTA exams are constructed using a validated exam blueprint to ensure adequate content representation and equivalence across all versions of the exams. Additionally, a committee of subject matter consultants validates each item on an exam using specific criteria. Finally, the passing score on the certification exams is determined through a rigorous process that is widely used and accepted by testing and assessment experts. This method, the Modified Angoff method (Angoff, 1971), first defines the performance standard required for safe and competent occupational therapy practice and then determines the number of exam questions candidates must answer correctly to demonstrate that they meet that standard. Future versions of the exam are statistically equated to this standard so that the passing standard remains constant over time, regardless of which version of the exam a candidate takes.
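To make the Modified Angoff computation concrete, here is a minimal sketch: each subject matter consultant estimates the probability that a minimally competent candidate would answer each item correctly, and the raw cut score is the sum of the item-level averages. The panel size and ratings below are invented, and the actual study includes steps not shown here (discussion rounds, review of impact data).

```python
# ratings[judge][item] = estimated probability that a minimally
# competent candidate answers the item correctly (hypothetical values).
ratings = [
    [0.70, 0.55, 0.80, 0.65],  # judge 1
    [0.75, 0.60, 0.85, 0.60],  # judge 2
    [0.65, 0.50, 0.90, 0.70],  # judge 3
]

n_items = len(ratings[0])

# Average the judges' estimates per item, then sum across items to get
# the raw passing score for this (toy) set of items.
item_means = [sum(judge[i] for judge in ratings) / len(ratings) for i in range(n_items)]
raw_cut = sum(item_means)

print(f"Raw cut score: {raw_cut:.2f} of {n_items} items")  # 2.75 of 4
```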
Exam Format
The COTA examination consists of single-response multiple-choice items and six-option multi-select items.
The single-response multiple-choice items contain a stem and three or four possible response options. Of the response options presented, there is only one correct or best answer. Candidates receive credit for selecting the correct response option. Points are not deducted for selecting incorrect response options.
The six-option multi-select items include a question stem followed by six possible response options. Of the options provided, three are correct responses and the other three are incorrect responses. The candidate must select three response options. Candidates receive credit for each correct response option selected. Points are not deducted for selecting incorrect response options.
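The scoring rule described above can be expressed as a short function. This is a sketch of the stated rule (credit for each correct selection, no deduction for incorrect selections); the one-point-per-option value is an assumption, since the source specifies the rule but not the point scale.

```python
def score_multiselect(selected: set, key: set) -> int:
    """One point per correctly selected option; no penalty otherwise."""
    assert len(selected) == 3, "candidates must select exactly three options"
    return len(selected & key)

key = {"A", "C", "E"}  # the three correct options (hypothetical item)
print(score_multiselect({"A", "C", "E"}, key))  # 3: full credit
print(score_multiselect({"A", "C", "F"}, key))  # 2: partial credit, no deduction
```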
Examples of multi-select items are available. These sample items do not replicate the candidate experience on exam day. Please view the exam tutorial for more information.
Candidates have four hours to complete the COTA exam. The exam consists of 190 multiple-choice and multi-select items, presented on the screen one at a time. Some items may include a picture or chart containing information needed to answer the question.
During the exam, candidates can highlight text to emphasize information. A strikeout feature is also available to help candidates visually eliminate response options or other content. Candidates can flag items for review and change their responses as testing time allows or until they submit the exam. If time runs out before a candidate reviews the flagged items, the selected response(s) will be submitted for scoring; no credit is given for flagged items with no response option selected. Candidates can also change the background and text colors of the exam at any time and can increase or decrease the font size, just as they would in an internet browser.
At the start of the exam, candidates have the option of taking a tutorial about the features and functionality of the exam platform. Time spent on the tutorial is not deducted from the four-hour test clock. During the exam, candidates can access the Help screens to revisit the information contained in the tutorial; however, the exam timer will continue to run.
Candidates can access the exam tutorial at any time before their exam day to become familiar with the exam platform.
The OTR exam consists of single-response multiple-choice items and six-option multi-select scenario sets.
The single-response multiple-choice items contain a stem and three or four possible response options. Of the response options presented, there is only one correct or best answer. Candidates receive credit for selecting the correct response option. Points are not deducted for selecting incorrect response options.
Each multi-select scenario set consists of an opening scene that contains general background information about a practice-related situation, followed by four related items. Each item is followed by six response options, three of which are correct and three of which are incorrect. Candidates must select the three best choices.
For all exam items, candidates receive credit for selecting the correct response options. There are no deductions for selecting incorrect response options.
We offer access to sample scenario sets so candidates can familiarize themselves with the structure of the item type. These sample items do not replicate the candidate experience on exam day. Please view the exam tutorial for more information.
For more information about the 2024 exams, please review the frequently asked questions.
Candidates have four hours to complete the OTR exam. The exam consists of a total of 180 items, including multiple-choice questions and multi-select scenario sets, presented on the screen one at a time. Some items may include a chart containing information needed to answer the question.
During the exam, candidates can highlight text to emphasize information. A strikeout feature is also available to help candidates visually eliminate response options or other content. Candidates can flag items for review and change their responses as testing time allows or until they submit the exam. If time runs out before a candidate reviews the flagged items, the selected response(s) will be submitted for scoring; no credit is given for flagged items with no response option selected. Candidates can also change the background and text colors of the exam at any time and can increase or decrease the font size, just as they would in an internet browser.
At the start of the exam, candidates have the option of taking a tutorial about the features and functionality of the exam platform. Time spent on the tutorial is not deducted from the four-hour test clock. During the exam, candidates can access the Help screens to revisit the information contained in the tutorial; however, the exam timer will continue to run.
Candidates can access the exam tutorial at any time before their exam day to become familiar with the exam platform.
The OTR and COTA exams are computer-delivered at testing centers located throughout the United States and internationally. Candidates can schedule their exam any day of the week during the business hours of the selected testing center. Scheduling instructions are provided in the candidate’s Authorization to Test letter.
We provide reasonable and appropriate accommodations for individuals with an ADA-recognized disability or condition who submit the required documentation. Refer to the Testing Accommodations page for more specific information.
Scoring
NBCOT certification exams are criterion-referenced. This means that only candidates who obtain a score equal to or greater than the minimum passing score will pass the exam. Overall performance is reported on a standardized scale ranging from 300 to 600. A total scaled score of at least 450 is required to pass the OTR or COTA certification exam. It is important to note that the passing standard is based on candidate performance across the entire exam. Pass/fail decisions are based only on the total number of exam questions answered correctly, and there are no domain-level passing standards.
A criterion-referenced test has a predefined minimum standard or criterion that all candidates must achieve to pass the exam. Because of the process used to determine the passing standard, achieving or exceeding the minimum score is a direct indicator of competence. On the other hand, norm-referenced tests compare the performance of a test taker to the average of a selected group of other test takers. Norm-referenced scoring does not compare a candidate’s score to an objective standard to determine competence; it only compares performance to that of other test takers. Because certification exams must demonstrate that candidates are competent to practice, criterion-referenced tests are used.
A scaled score is a mathematical conversion of the number of items a candidate answered correctly (the raw score) onto a consistent scale that is used across all forms of the exam. This transformation is like converting a weight from pounds to kilograms or a temperature from Celsius to Fahrenheit: the weight or temperature has not changed, only the units used to report it.
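As a sketch, the conversion can be written as a simple linear transformation anchored at the cut score. The slope, raw cut score, and clamping behavior below are illustrative assumptions; the actual conversion parameters are derived during equating so that the cut score always maps to 450 on the 300–600 scale.

```python
def to_scaled(raw: int, raw_cut: float, scale_cut: float = 450.0,
              slope: float = 2.5) -> float:
    """Linear transformation anchored so that raw_cut maps to scale_cut."""
    scaled = scale_cut + slope * (raw - raw_cut)
    return max(300.0, min(600.0, scaled))  # clamp to the 300-600 reporting range

print(to_scaled(130, raw_cut=120))  # 475.0 -> above the 450 passing standard
print(to_scaled(115, raw_cut=120))  # 437.5 -> below the passing standard
```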
Each version (or “form”) of the exam is built to the same content specifications and tests the same domains of occupational therapy practice; however, each form contains a different selection of items. The set of items used on one exam form may vary in difficulty compared to the items that are used on another exam form. If one exam form is slightly harder than the other, achieving the same number or percentage of items correct does not have the same meaning on each form. Scaled scoring, along with a statistical process called equating, accounts for slight variations in form difficulty by ensuring that the same exam score has the same meaning across all forms. Reporting scaled scores is standard practice on certification exams and other standardized tests because it allows for direct comparisons of scores across different exam forms. A score of 450 means the same thing on every form of the exam.
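The sketch below illustrates one classical equating technique, linear mean-sigma equating, using invented score data. NBCOT pre-equates forms using IRT (see Test Metrics below), so this simplified version is only meant to show why equating matters: the same raw score on a slightly harder form maps to a somewhat higher equated score.

```python
from statistics import mean, stdev

# Hypothetical raw scores from groups of comparable ability on two forms;
# the new form is slightly harder, so its raw scores run lower.
base_form_scores = [118, 125, 131, 140, 122, 135]
new_form_scores  = [112, 119, 126, 134, 116, 129]

def mean_sigma_equate(x: float, new: list, base: list) -> float:
    """Map a score x from the new form onto the base form's scale."""
    a = stdev(base) / stdev(new)
    b = mean(base) - a * mean(new)
    return a * x + b

# A raw 120 on the harder new form equates to roughly 126 on the base form.
print(f"{mean_sigma_equate(120, new_form_scores, base_form_scores):.1f}")
```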
The certification exams are criterion-referenced, meaning a candidate’s performance is compared to a predetermined minimum standard. In keeping with accreditation standards, we conduct standard setting studies to establish the passing standard for the OTR and COTA exams. The methodology used for the most recent standard setting studies, the Modified Angoff method (Angoff, 1971), requires panels of subject matter consultants to identify the minimum level of competency required to pass the exams (Cizek, 2012).
Candidates who pass the exam receive a congratulatory letter that includes the candidate’s earned exam score.
We also provide performance feedback to candidates who do not achieve a passing score on the exam. This performance feedback report includes the candidate’s score as well as the average score of new graduates who recently passed the exam. The report also includes a domain-level performance chart to help candidates identify areas of relative strength and weakness. Information about the appropriate interpretation and use of domain scores accompanies this section of the report. Finally, a list of frequently asked questions is provided to address queries regarding the determination of the passing score, use of scaled scores, candidate performance comparisons, score reporting, exam preparation, and exam content.
Sample OTR performance feedback report for candidate with failing score
Sample COTA performance feedback report for candidate with failing score
We use a variety of quality assurance procedures as part of our score verification process. Additional quality control measures relating to exam administration take place after an exam is administered and before an official feedback report is sent, ensuring that candidates receive accurate information about their final scores.
Test Metrics
Each exam includes a preselected number of field-test items. Although these items are not considered when determining candidates’ scores, performance data is collected and analyzed for each field-test item after a large sample of candidates has seen it. The statistical analysis of field-test items is an important fairness and quality control measure that helps ensure the validity of exam scores: field-testing ensures that only high-quality, statistically sound items are used to determine a candidate’s score. Field-test items are presented randomly throughout the exams, and candidates cannot distinguish between scored and field-test (non-scored) items.
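For illustration, here are two classical statistics commonly examined during field-test analysis: the item p-value (the proportion of candidates answering correctly) and the item-total point-biserial correlation. The toy response data and the choice of these particular metrics are assumptions; the source does not specify which statistics NBCOT reviews.

```python
from statistics import mean, stdev

# responses[candidate][item] = 1 if correct, 0 otherwise (toy data)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
]

totals = [sum(r) for r in responses]  # each candidate's total score

def p_value(item: int) -> float:
    """Item difficulty: proportion of candidates answering correctly."""
    return mean(r[item] for r in responses)

def point_biserial(item: int) -> float:
    """Correlation between the item score and the total score."""
    item_scores = [r[item] for r in responses]
    m1 = mean(t for t, s in zip(totals, item_scores) if s == 1)
    m0 = mean(t for t, s in zip(totals, item_scores) if s == 0)
    p = mean(item_scores)
    return (m1 - m0) * (p * (1 - p)) ** 0.5 / stdev(totals)

for i in range(4):
    print(f"item {i}: p = {p_value(i):.2f}, r_pb = {point_biserial(i):.2f}")
```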
We use Item Response Theory (IRT) methodology to analyze and calibrate exam items and to pre-equate exam forms. IRT statistics provide test developers with valuable information on the psychometric properties of each item in the item bank. Access to this information during test construction facilitates the selection of appropriate items for the new exam forms. Our psychometricians routinely evaluate the statistical performance of exam items against predetermined metrics. This process contributes to the validity and reliability of the certification exams.
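As a minimal illustration of the IRT idea, the sketch below uses the Rasch (one-parameter logistic) model: each item carries a calibrated difficulty, and the model gives the probability that a candidate of a given ability answers it correctly. Which IRT model NBCOT uses is not stated here, and the item parameters below are invented.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """P(correct) for a candidate of ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A calibrated item bank stores a difficulty estimate per item, letting
# test developers assemble new forms with a known difficulty profile.
item_bank = {"item_101": -0.8, "item_102": 0.0, "item_103": 1.2}

for item, b in item_bank.items():
    print(f"{item}: P(correct | theta=0.5) = {rasch_probability(0.5, b):.2f}")
```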
References
Americans with Disabilities Act of 1990, Pub. L. No. 101-336, 104 Stat. 327 (1990).
Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (2nd ed.). American Council on Education.
Cizek, G. J. (Ed.). (2012). Setting performance standards: Foundations, methods, and innovations (2nd ed.). Routledge.
Equal Employment Opportunity Commission [EEOC]. (1978). Uniform guidelines on employee selection procedures, 41 CFR Part 60-3.
National Board for Certification in Occupational Therapy. (2023a). Practice analysis of the Certified Occupational Therapy Assistant: Executive summary [PDF]. https://www.nbcot.org/-/media/PDFs/2022_COTA_Practice_Analysis.pdf
National Board for Certification in Occupational Therapy. (2023b). Content outline for the COTA examination [PDF]. https://www.nbcot.org/-/media/PDFs/2022_COTA_Content_Outline.pdf
National Board for Certification in Occupational Therapy. (2023c). Practice analysis of the Occupational Therapist Registered: Executive summary [PDF]. https://www.nbcot.org/-/media/PDFs/2022_OTR_Practice_Analysis.pdf
National Board for Certification in Occupational Therapy. (2023d). Content outline for the OTR examination [PDF]. https://www.nbcot.org/-/media/PDFs/2022_OTR_Content_Outline.pdf
National Commission for Certifying Agencies. (2014). The NCCA's standards for the accreditation of certification programs. Institute for Credentialing Excellence.