STD prevention program evaluation study

However, all Evaluation Center staff were required to complete CITI Training modules in human subjects research, and the data manager developed a one-page list of protected health information (PHI) guidelines for sites, which included NU requirements for accepting completely de-identified data. Using the data collection tools created by the sites, the Evaluation Center coached field staff through the process of collecting quality data from program clients.

Three sites were able to use existing data reporting tools (e.g., REDCap); for sites that did not, the Evaluation Center created specialized Excel spreadsheets to enter and share their outcome data (see Figure 1). Due to limited staff capacity, the Evaluation Center spent extensive time tailoring these spreadsheets to minimize the likelihood of data entry errors, which were frequent during the first few data shares.
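One way to reduce entry errors in a spreadsheet-based tool is to build validation rules directly into the workbook. The sketch below is a minimal, hypothetical illustration of that idea using openpyxl; the column layout, cell ranges, and response options are assumptions rather than the actual site templates.

```python
# Minimal sketch: add entry constraints to a data collection workbook.
# Column letters, ranges, and response options are hypothetical.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws.title = "Client Outcomes"
ws.append(["client_id", "visit_date", "tested_for_hiv", "condoms_provided"])

# Restrict a yes/no outcome column to a dropdown list.
yes_no = DataValidation(type="list", formula1='"Yes,No"', allow_blank=True,
                        error="Please choose Yes or No.", showErrorMessage=True)
ws.add_data_validation(yes_no)
yes_no.add("C2:C1000")

# Restrict a count column to non-negative whole numbers.
counts = DataValidation(type="whole", operator="greaterThanOrEqual", formula1="0",
                        error="Enter a whole number of 0 or more.", showErrorMessage=True)
ws.add_data_validation(counts)
counts.add("D2:D1000")

wb.save("site_outcomes_template.xlsx")
```

Dropdown lists and numeric constraints of this kind catch many common typing mistakes at the moment of entry, before the data are ever shared.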

Time and resources were spent educating delegate agency staff, via both a webinar and a guide developed by the Evaluation Center, on what constitutes PHI so that de-identified data could be shared. While most sites agreed up front to share these data with the Evaluation Center, one site requested that a data use agreement be signed before sharing data, so the Evaluation Center worked to facilitate that process.
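As a rough illustration of the kind of pre-share safeguard such guidance supports, the following sketch screens a file for direct identifiers before it is exported; the column names are hypothetical stand-ins for common PHI fields, not the agencies' actual variables.

```python
# Sketch: screen a dataset for direct identifiers before sharing.
# Column names below are hypothetical examples of PHI fields.
import pandas as pd

PHI_COLUMNS = {"name", "date_of_birth", "address", "phone", "email",
               "medical_record_number", "ssn"}

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Drop any columns that match the PHI list (case-insensitive)."""
    flagged = [c for c in df.columns if c.lower() in PHI_COLUMNS]
    if flagged:
        print(f"Removing potential PHI columns before sharing: {flagged}")
    return df.drop(columns=flagged)

if __name__ == "__main__":
    raw = pd.read_csv("site_export.csv")          # hypothetical site export
    deidentify(raw).to_csv("site_export_deid.csv", index=False)
```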

Agencies initially shared their updated spreadsheets, or their most recent data export from REDCap, containing inception-to-present information on a monthly basis.

The Evaluation Center data manager would check these data for errors and give feedback to sites. Eventually, as sites shared higher-quality data, these shares were reduced to a quarterly basis to lessen the time burden on project staff. In addition to overarching outcome analyses that will be explored in future articles, the Evaluation Center provided sites with immediate feedback on their program performance. One way the Evaluation Center accomplished this was by building basic formulas and visualizations into the site data collection spreadsheets (see Figure 1).
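The snippet below sketches the sort of automated checks a data manager might run on each inception-to-present share before returning feedback; the file name, column names, and validation rules are illustrative assumptions, not the project's actual procedures.

```python
# Sketch: basic quality checks on a site's inception-to-present data share.
# File name, column names, and rules are illustrative assumptions.
import pandas as pd

def check_share(df):
    issues = []
    if df["client_id"].duplicated().any():
        issues.append("Duplicate client_id values found.")
    if df["enrollment_date"].isna().any():
        issues.append("Missing enrollment dates.")
    future = pd.to_datetime(df["enrollment_date"], errors="coerce") > pd.Timestamp.today()
    if future.any():
        issues.append(f"{int(future.sum())} enrollment dates are in the future.")
    bad_age = ~df["age"].between(13, 100)
    if bad_age.any():
        issues.append(f"{int(bad_age.sum())} age values fall outside 13-100.")
    return issues

share = pd.read_excel("site_share_Q1.xlsx")   # hypothetical shared file
for issue in check_share(share):
    print("FEEDBACK:", issue)
```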

Additionally, beginning in Year Three of the funding cycle, the Evaluation Center began creating comprehensive outcome data reports that used data visualization techniques to highlight program performance (see Figure 2).
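A stripped-down sketch of this kind of report graphic is shown below; the outcome, values, and target line are fabricated placeholders for illustration only and do not represent project data.

```python
# Sketch: a simple quarterly outcome chart for a site-level report.
# All values and labels are fabricated placeholders for illustration only.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
pct_meeting_outcome = [42, 51, 58, 66]   # placeholder percentages

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.bar(quarters, pct_meeting_outcome, color="#4c72b0")
ax.axhline(60, linestyle="--", color="gray", label="Program target (placeholder)")
ax.set_ylabel("% of clients meeting outcome")
ax.set_title("Example outcome trend for a site report")
ax.set_ylim(0, 100)
ax.legend()
fig.tight_layout()
fig.savefig("site_outcome_report_example.png", dpi=150)
```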

These visually appealing reports were provided to sites and were used to make programmatic decisions, call out potential data collection and entry issues, and begin conversations surrounding dissemination of results.

As mentioned previously, the Evaluation Center placed an emphasis on applying each of the 10 principles to the evaluation approach to ensure fidelity to EE, given the deviation from the traditional approach. The following section outlines how the Evaluation Center was able to apply and highlight each principle within its work with delegate agencies.

Descriptions of each principle can be found in Table 2. The Evaluation Center viewed the entire project as an opportunity for continuous improvement within delegate agencies, CDPH, and the Evaluation Center itself. For delegate agencies, this entailed improving evaluation capacity through individual and overarching technical assistance, as well as improving interventions by using process and outcome data to make programmatic changes.

For example, one agency had difficulty collecting and sharing quality data early in the project, but after extensive technical assistance from the Evaluation Center, it improved its processes significantly. This resulted in a clean, analyzable dataset that was presented to a supervisor and led to additional funding to scale up the program. At the request of CDPH, the Evaluation Center used these outcome data, along with its experience working with the agencies, to inform how CDPH interacted with delegate agencies, extending beyond its traditional role of contract manager and scope reviewer to supporting program integration into agency systems.

Similarly, CDPH used preliminary findings presented by the Evaluation Center to structure future funding announcements to maximize their effectiveness. The Evaluation Center sought to use this project to improve its understanding of the landscape of HIV prevention in Chicago, as well as its ability to effectively engage with agencies with different levels of experience.

As mentioned, the Evaluation Center placed community ownership at the center of this approach from the beginning. This principle was achieved by facilitating a process in which program field staff and project management took control of the entire evaluation. Additionally, the sites were completely in charge of data collection and entry, with the Evaluation Center simply reviewing their work and sharing best practices to improve data quality. On multiple occasions the Evaluation Center would have liked to standardize logic models and data collection tools across projects, but instead it supported each site's decisions on how to ask questions in a way that was understandable and appropriate to its community.

By serving as evaluation coaches, the Evaluation Center allowed sites to gain meaningful experience and truly own their evaluation. The Evaluation Center worked hard to ensure both field-level staff and director-level staff were engaged in all evaluation activities by stressing the importance of having both types of individuals involved at the launch meeting.

Field staff and program directors represent very different skill sets and experiences, so ensuring individuals from each level were included led to the development of quality, culturally relevant evaluation materials that could be integrated into the day-to-day activities of the program. By including field staff, who work most closely with the clients, data collection tools were tailored to the specific subpopulation(s) being served.

By involving program directors, recommendations based on findings were more likely to be implemented by the organization. When there was staff turnover at a delegate agency, which was particularly common among field staff, a replacement was immediately identified rather than relying on the project director as the sole contact.

In all activities, the Evaluation Center attempted to facilitate an open dialogue with the sites and ensure that final decisions were based on input from all individuals. While the Evaluation Center steered individuals toward best practices for crafting survey questions and creating measures, the site was fully involved in all decision making. One specific example is a site that asked an extensive number of questions; the Evaluation Center recommended they pare down the survey.

However, the Evaluation Center avoided taking control of the decision making and allowed the site to take the lead in identifying the questions that were most important to them. While they finished the process with an extremely large questionnaire, their staff were happy and bought into the importance of collecting and sharing these data. Had the Evaluation Center made an executive decision to cut the instrument without including the site, the relationship could have been strained and the site likely would have felt less inclined to spend time collecting and sharing quality data.

By remaining in an evaluation coaching role and guiding sites in a particular direction, the Evaluation Center was able to be part of the decision making without discouraging their participation. A key factor in the use of this evaluation approach was emphasizing inclusivity and ensuring broad access to an intervention, particularly by marginalized populations identified by the delegate agency teams. Through its role as an evaluation coach, the Evaluation Center team was able to have targeted discussions and host a webinar on how to improve and diversify recruitment to reach subgroups, increasing the ability of the resulting evaluation to truly measure improvements among the population being served.

By helping improve programming and increasing the ability of agency stakeholders to plan and implement programming in the future, the Evaluation Center played an active role in working toward social justice for marginalized populations and communities disproportionately affected by HIV.

Specifically, this evaluation partnership serves primarily sexual and gender minority individuals, as well as Black and Latinx communities.

For example, while Evaluation Center staff offered feedback on how to structure survey questions to accurately capture outcomes, delegate agencies made the final edits to ensure the data collection tools were culturally appropriate and used terms that were understandable to the community. Furthermore, sites were encouraged to work closely with the populations they served to help improve the process. For example, one agency consistently worked with a group of clients to help lead intervention and evaluation development.

The agency consulted with this group on all programmatic materials, from program activities to data collection instruments, before sending them to the Evaluation Center to finalize. For both the EBI adaptation and homegrown intervention development processes, the Evaluation Center attempted to inform all activities and decisions by providing access to the literature. Accordingly, the Evaluation Center helped sites navigate the adaptation process to ensure fidelity to the core principles of the EBI, while also utilizing staff knowledge of the community and the specific needs of people living with HIV (PLWH).

In another instance, an agency chose to implement a homegrown intervention but wanted to pull heavily from what had been shown to be effective in different contexts. In this case, the Evaluation Center helped the agency document the process of integrating multiple successful interventions into a streamlined curriculum. The Evaluation Center also promoted the use of evidence-based strategies by ensuring the evidence collected throughout this evaluation was rigorous. To achieve this, the Evaluation Center offered validated measures and other previously used data collection tools to inform the sites' work developing their own surveys.

Through the previously mentioned feedback loop, the Evaluation Center was able to ensure all materials were based in evidence and were adapted in appropriate ways. The principle of capacity building was extremely important to CDPH, as they hoped that funding an evaluator would not only facilitate collection of quality data to assess program success but also create a community of organizations dedicated to learning and improvement.

This included the technical assistance provided to individual sites to develop evaluation materials, as well as overarching technical assistance that included developing and disseminating webinars. To further demonstrate its dedication to this principle, the Evaluation Center focused one element of its overarching evaluation plan on measuring change in capacity via the Evaluation Capacity Survey.

The Evaluation Center used the results from this survey to inform future activities and to further focus on building capacity at the organizations.
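As a hedged sketch of how change in capacity might be summarized across survey waves, the snippet below computes a simple baseline-to-follow-up difference per agency; the file layout, wave labels, and scoring are assumptions rather than the actual structure of the Evaluation Capacity Survey.

```python
# Sketch: summarize change in evaluation capacity between survey waves.
# File name, column names, and wave labels are assumptions for illustration.
import pandas as pd

surveys = pd.read_csv("capacity_survey.csv")   # columns: agency, wave, capacity_score
                                               # wave values assumed: "baseline", "follow_up"
change = (
    surveys.pivot_table(index="agency", columns="wave", values="capacity_score")
           .assign(change=lambda d: d["follow_up"] - d["baseline"])
           .sort_values("change", ascending=False)
)
print(change[["baseline", "follow_up", "change"]])
```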

This approach sought to help organizations learn from their experiences, build on successes, and make mid-course corrections. For example, once quality outcome data were submitted to the Evaluation Center, the team created data visualization reports intended to start conversations about interpreting and using these data to improve programming.

Some of these reports highlighted outcomes that were not achieving significant improvement. As mentioned previously, the adapted EE model used by the Evaluation Center placed an emphasis on helping agencies learn to deliberately link program activities to intended outcomes; together, this allowed sites to change specific programmatic activities to try to address these shortcomings. Additionally, with each discussion about evaluation materials and programmatic changes, the Evaluation Center attempted to steer the conversation toward lessons learned and how the sites could use these same processes in the day-to-day operations of their organizations to evaluate and improve all programming.

Another key example of organizational learning is the work with CDPH. Through monthly and quarterly meetings with various stakeholders, the Evaluation Center was able to share information about the project to help the CDPH team learn how to better structure its funding and monitoring mechanisms moving forward. The Evaluation Center has also focused on working with sites to disseminate findings from these projects to extend organizational learning beyond the individual agencies.

These processes, combined with the site-specific and overarching technical assistance activities and the collaboration with CDPH, created a community of learning across the project teams. This will allow future organizational learning to occur as well, given the value of communicating programmatic successes and challenges and engaging in conversation with the diverse audiences present at conferences, meetings, and webinars. Prior CDPH funding initiatives only required delegate agencies to report information about achievement of project scopes.

However, this project marked the start of focusing on outcomes and mutual accountability from all parties. This approach helped make the transition possible, as the Evaluation Center focused on encouraging sites to think logically about their program activities.

Ten middle schools were recruited to participate in this intervention trial: five were randomized to the intervention group and five to the control group. The study is also developing a model for obtaining community support for the development of HIV, STD, and pregnancy prevention programs for middle school youth.

The primary hypothesis to be tested is that students attending middle schools who receive a multi-component HIV, STD, and pregnancy prevention intervention will postpone sexual activity or reduce levels of current sexual activity relative to those in the comparison condition.

The major dependent variables are the proportion of students who are sexually active and the proportion initiating sexual intercourse. Intentions to engage in sexual activity, number of times of unprotected sexual intercourse, and number of sexual partners will also be examined. Secondary hypotheses will examine the effect of the multi-component HIV, STD, and pregnancy prevention intervention on students' knowledge, self-efficacy, attitudes, perceived norms, barriers, and communication with parents.
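For illustration only, a naive comparison of the primary dependent variable between arms could look like the sketch below. This is not the trial's analysis plan; among other things, a real analysis of a school-randomized trial would need to account for clustering of students within schools, and the data layout shown is an assumption.

```python
# Sketch: compare the proportion of sexually active students by study arm.
# Data layout is hypothetical; a real analysis would account for clustering by school.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

students = pd.read_csv("follow_up_survey.csv")   # columns: arm, sexually_active (0/1)

counts = students.groupby("arm")["sexually_active"].agg(["sum", "count"])
stat, p_value = proportions_ztest(count=counts["sum"].values, nobs=counts["count"].values)

print(counts)
print(f"Two-proportion z-test: z = {stat:.2f}, p = {p_value:.3f}")
```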

The intervention curriculum, "Keep It Real," consists of 12 lessons delivered in Grades 7 and 8. In each grade, the program integrates group-based classroom activities. A life-skills decision-making paradigm (Select, Detect, Protect) underlies the activities, teaching students to select personal limits regarding risk behaviors, to detect signs or situations that might challenge these limits, and to use refusal skills and other tactics to protect these limits.

The control group received the standard sexual education curriculum. The primary hypothesis tested was that the intervention would decrease the number of adolescents who initiated sexual activity by the ninth grade relative to those in the comparison schools. Sexual activity was defined as participation in vaginal, oral, or anal sex.

Sexual activity questions were defined in advance and were worded in a gender-neutral manner to elicit responses for same- and opposite-sex partners.

Funding was apportioned to each state, territory, or directly funded city based on the number of people reported to be living with an HIV diagnosis in that jurisdiction, a change from prior health department funding allocations that were based on AIDS cases, which were no longer the best indicator of disease burden.

The next 5-year funding cycle integrated HIV surveillance and prevention funding for health departments. Most importantly, this integration allows health departments to plan and execute more efficient, coordinated, and data-driven prevention efforts.

The funding opportunity prioritizes proven, cost-effective strategies with the greatest potential to reduce new infections. As was done with the previous funding opportunity, CDC apportioned funding to each state, territory, or directly funded city based on the number of people reported to be living with an HIV diagnosis in that jurisdiction as of the most recent year for which complete data were available (see map).

In addition, allocations were based on the most recent known address for each person living with HIV rather than their residence at the time their infection was first diagnosed, to account for geographic mobility.
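As a simplified illustration of apportioning a budget in proportion to diagnosed HIV prevalence, the sketch below divides a placeholder budget across invented jurisdictions; the actual CDC allocation formula may include additional rules not modeled here.

```python
# Sketch: apportion a fixed budget in proportion to people living with diagnosed HIV.
# Jurisdiction names, counts, and the budget are invented placeholders; the real
# allocation formula may include additional rules not modeled here.
total_budget = 100_000_000  # placeholder dollars

diagnosed_prevalence = {     # placeholder counts per jurisdiction
    "Jurisdiction A": 25_000,
    "Jurisdiction B": 10_000,
    "Jurisdiction C": 5_000,
}

total_cases = sum(diagnosed_prevalence.values())
for jurisdiction, cases in diagnosed_prevalence.items():
    share = total_budget * cases / total_cases
    print(f"{jurisdiction}: ${share:,.0f} ({cases / total_cases:.1%} of cases)")
```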


