---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:8095
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: >-
    First, states can clearly define which students are excluded from a cohort
    by transferring, and this definition should eliminate the possibility that a
    dropout will be counted as a transfer, as happens currently in Florida.
    Second, states should take steps to ensure the accuracy of a transfer code
    by requiring a transcript request or other confirmation step at the local
    level. Third, states should design an audit of the assignment of exit codes
    on an annual basis to ensure accuracy of the system as a whole, in addition
    to other editing and audit mechanisms. Fourth, states should group cohorts
    by birth year rather than the year in which they entered high school. There
    are several reasons for this last recommendation. Reporting graduation rates
    by birth cohort will eliminate the bias of differential retention rates. In
    addition, reporting graduation rates by birth cohort will eliminate any bias
    from differential placement of students transferring into a state's public
    high schools. With student-MIGRATION AND GRADUATION, PAGE 33 level
    databases, there is no significant cost to reporting graduation rates by
    birth cohort.

     Recently proposed grade-based graduation measures and a new age-based measure are all subject to bias from misestimating student migration, whether international, internal, or inter-sector. For one case, Virginia public schools in 2003, moving from an assumption of zero net migration or net-increment rates to 0.03 rates corresponds to changes in the graduation estimate between 6.6% and 10.7%, depending on the measure. In absolute terms, the various measures ranged from 63.2% to 83.5% given plausible net in-migration or net-increment rates between 0 and 0.03. Even relatively small changes in the assumed in-migration or net-increment rate, between 0.01 and 0.02, resulted in measurable drops of the graduation estimate between 2.2% and 3.6%, depending on the measure chosen. Florida's experience with longitudinal cohort graduation rates shows both the promise of the NGA compact on graduation rates and also the need for appropriate operationalization of definitions and steps to improve the technical adequacy of the information. Florida's rates are inflated because the graduation rate simultaneously eliminates responsibility for students who drop out and then immediately enroll in a GED program-and then credits public schools for the students who eventually earn a GED. Florida's database is also one with no confirmation or auditing of transfer codes.

     Finally, serious consideration needs to focus on the question of whether grade-based or age-based graduation rates are better. Most current school statistics report information by grade or grade cohort, including several recentlyproposed graduation-rate formulas and also the NGA compact and its progenitors (including Florida's official graduation rate). Yet grade-based graduation rates conflate grade level with cohort. Quasi-cohort methods that use ninth-grade enrollment statistics cannot distinguish first-time ninth-graders from repeaters.

     Longitudinal student databases such as Florida's cannot always determine the cohort to which a student transferring into the public schools truly belongs. Age is less vulnerable to such conflation problems, and any state with an accurate student database can report information by birth cohort (for longitudinal cohort rates) or by age (for period rates).

     Given the requirements in No Child Left Behind to calculate a graduation rate for every high school, it appears from the analysis here that there is no broadly-used measure currently able to estimate graduation with degree of precision at a state level, let alone at the school level. While the National Governors Association (2005c) compact on a longitudinal cohort rate is appropriate, at least in theory, in practice states that already have a longitudinal rate show some evidence of inflating graduation rates. The No Child Left Behind requirement is desirable but currently impossible to meet. Meeting the law requires a well-operated student registration system, a system where records of diplomas, enrollments, and transfers are all audited regularly to raise confidence in the accuracy of transfer and migration data.
  sentences:
  - >
    What are the risks associated with bone cement leakage during a surgical
    procedure?
  - |
    How can student migration impact graduation rate estimates?
  - >
    What are the potential therapeutic effects of ShK-186 despite its short
    circulating half-life?
- source_sentence: >-
    Toxoplasma gondii, the protozoan parasite distributed worldwide, is common
    among humans and a broad range of warm-blooded animals. 1 The main routes of
    human infection are by the consumption of raw or undercooked meat containing
    tissue cysts and ingestion of oocysts via other food products, water, or
    vegetables. 2 Congenital infection can occur by vertical transmission of
    rapidly dividing T. gondii tachyzoites during pregnancy. 3 Prenatal
    infection leads to an increased risk of spontaneous abortion,
    chorioretinitis, or serious neurodevelopmental disorders such as
    hydrocephaly and microcephaly. 3 Although T. gondii infection is benign in
    immunocompetent individuals, it is life threatening in congenital form and
    in immunocompromised patients due to reactivation of the infection. 4
    Therefore, accurate diagnosis of acute maternal toxoplasmosis in
    immunocompromised patients and pregnant women is critical.

     Mohammadpour et al depends on meat cooking habits, socioeconomic status, and geographical conditions such as temperature and humidity. 6, 7 In Iran, seroprevalence ranged from 14.8% to 66% with typically increasing level according to age, and the overall seroprevalence rate of toxoplasmosis among the Iranian general population was 39.3%. 8 Prolactin (PRL) hormone is secreted by pituitary gland which is located below the cerebral cortex. Low levels of this hormone are secreted in blood of female and male individuals and the secretion is under control by PRL inhibitory factors such as dopamine. 9 Hyperprolactinemia is a situation in which large amounts of PRL exist in blood of men and pregnant women. The role of PRL has been proven in immune system as PRL receptors are located on the surface of B and T lymphocytes and macrophages and production of cytokines such as tumor necrosis factor alpha (TNF-α), interferon γ (IFNγ), and interleukin-12 (IL-12) are induced by this hormone. 10 The inhibitory effects of PRL on proliferation of T. gondii in mononuclear cells of individuals with high levels of PRL have been shown previously. 11 The present study was carried out to assess the possible relation between serum PRL levels and frequency of T. gondii infection in humans.

     Men and women aged 15-58 years with no clinical complications participated in this cross-sectional study. A total of 343 blood samples were collected from individuals who had been referred for PRL measurement in medical diagnostic laboratories in Karaj, Iran, from April to September 2016. Demographic characteristics such as sex, age, marital status, and current pregnancy status were recorded through questionnaires. Woman participants who were pregnant/nursing were excluded from the current study. Then, 3 mL of whole blood samples were collected from each of them; the sera were separated and stored at -20°C until use. After collecting samples, concentration of PRL was measured and the samples were divided into cases with high or low levels of PRL and comparison group with normal levels of PRL.

     ELISA was designed to detect anti-Toxoplasma IgG antibody in blood sera. The cutoff values of ODs were calculated according to Hillyer et al. 12 The OD of each sample was compared with the cutoff and recorded as positive or negative result. The cutoff value with 95% CI was determined to be 0.45 for the detection of anti-T. gondii IgG.

     Tachyzoites of T. gondii, RH strain was maintained in BALB/c mice with serial passages. 13 Tachyzoites that had been inoculated in peritoneal cavity of BALB/c mice were harvested by peritoneal washing with PBS (pH 7.2). The tachyzoites were washed two times with cold PBS, sonicated, and centrifuged at 4°C, 14,000×g for 1 hour. Then, supernatant was collected as soluble antigen, and the protein concentration was determined by Bradford method.

     Microtiter plates were coated with soluble antigens of T. gondii, RH strain. Sera were added in dilution of 1:100 in PBS followed by incubation and washing. Anti-human IgG conjugated with horseradish peroxidase (HRP; Dako Denmark A/S, Glostrup, Denmark) was added after incubation. After washing, chromogenic substrate ortho-phenyline-diamidine (OPD) was added and the reaction was stopped by adding sulfuric acid. The optical density was read and recorded by an automated ELISA reader at 490 nm.
  sentences:
  - >
    How does hyperprolactinemia affect the immune response to Toxoplasma gondii
    infection?
  - |
    What is the main cause of anemia in patients with chronic kidney disease?
  - >
    How does stimulus efficacy in a pacemaker depend on the charge density and
    rate of delivery?
- source_sentence: >-
    From the post-mortem ultrasound, the ventriculomegaly was well depicted
    before the MRI was performed, and a corpus callosum was in fact present
    (white arrows) thought to be related to increased maternal risk factors such
    as diabetes, body mass index, assisted reproductive techniques and alcohol
    consumption. These are key aspects of the mother's clinical history which
    should be available at post-mortem imaging.

     At PMUS the detection of complex cardiac anomalies is difficult due to a combination of lack of circulating blood, intra-cardiac haemostasis and occasionally intra-cardiac gas (likely from feticide [39] ). Some distortion of the normal anatomy at post-mortem examination can be overcome by imaging the foetus in a waterbath [9] .

     The commonest cardiac anomalies at termination of pregnancy are hypoplastic right/left heart syndrome [31] and uni-ventricular heart defects [40] . Other pathologies also feature, although less commonly, and include pulmonary atresia/stenosis ( Fig. 10) , aortic valve atresia/ stenosis, transposition of the great arteries, tetralogy of Fallot, coarctation of the aorta, anomalous pulmonary venous return and septal defects (Fig. 11) .

     Where cardiac imaging is non-diagnostic at PMUS, further cross-sectional imaging with PMMR may be useful, particularly if high-resolution, isovolumetric sequences for multiplanar reconstructions are acquired, given the post-mortem distortion of normal anatomy due to 'slumping'.

     Congenital pulmonary anomalies are the least common structural abnormalities seen at PMUS [25] . Whilst congenital pulmonary malformations (including cystic malformations, bronchopulmonary sequestrations, bronchial atresia, congenital lobar emphysema and bronchogenic cysts) may all be seen in live children, these are rarely the cause for foetal demise or terminations of pregnancy [41, 42] . In our experience, we have not detected any airway or lung malformations on PMUS, although Kang et al. [25] report one autopsy confirmed case of a bronchopulmonary foregut malformation in their series, which was missed on PMUS. It could have been due to the subtlety of the appearances that lead to the miss on PMUS; however, the medical literature is sparse with regards to the ideal post-mortem imaging of congenital pulmonary malformations.

     The commonest finding at post-mortem imaging of the lungs is lung hypoplasia, usually secondary to other Fig. 4 Post-mortem ultrasound images of the brain, in coronal section (top row), with matched T2-weighted post-mortem MRI images (bottom row) in a foetus at 25 weeks gestation. The pregnancy was terminated for suspected brain anomalies. Both imaging modalities were performed 2 days after delivery. The images demonstrate views through the frontal lobes (a, d), at the level of the Foramen of Monroe (b, e) and through the posterior horns of the lateral ventricles (c, f). The ultrasound image clearly depicts an interhemispheric cyst (white arrow) with internal septations (c), and there is absence of the corpus callosum. This is also evident from the MRI image (f), although the cyst is much better viewed on ultrasound intra-abdominal pathologies such as congenital diaphragmatic hernias (Fig. 12) or enlarged polycystic kidneys. Excluding pulmonary infection is not currently possible [9] given that the foetal and early neonatal lungs are normally fluid filled.

     Abnormalities of the abdomen seen at PMUS are most commonly related to the urinary tract or abdominal wall, the latter including pathologies such as gastroschisis, omphalocele ( Fig. 13 ) and congenital diaphragmatic hernia (Fig. 12) [25, 26, 43] . Whilst the presence of an anterior abdominal wall defect does not require ultrasound for diagnosis, the resultant distortion and shift of intra-abdominal organs may have made prenatal imaging difficult and therefore examination of the presence of internal structures is the main criteria for imaging these cases.

     Congenital intra-abdominal foetal tumours are very rare but may occur in the liver (such as haemangiomas, mesenchymal hamartoma and hepatoblastomas), kidneys (mesoblastic nephroma), pelvis (sacrococcygeal teratoma) or adrenal gland (neuroblastoma) [44] . We have previously identified splenic metastases from an aggressive primary fibrosarcoma (Fig. 14) and a suprarenal cystic mass secondary to in utero adrenal haemorrhage (Fig.
  sentences:
  - >
    What are some of the factors that diabetic patients must consider in their
    daily self-care routine?
  - >
    What are the most common cardiac anomalies found during termination of
    pregnancy?
  - >
    What are the most common treatment-related adverse events associated with
    mAbs that target the PD-1/PD-L1 pathway?
- source_sentence: >-
    In children without meningeal inflammation, 22 to 30% of a concomitant level
    in serum is achieved in the CSF of children after treatment with
    alatrovafloxacin, the intravenous form of trovafloxacin (8) . These levels
    are in excess of the concentrations of trovafloxacin needed to inhibit S.
    pneumoniae in vitro. Clinical studies may prove that trovafloxacin is an
    appropriate alternative agent for pneumococcal meningitis.

     The use of adjunctive dexamethasone in addition to antibiotics for the treatment of pneumococcal meningitis remains somewhat controversial (127) . The number of patients with pneumococcal meningitis enrolled in randomized trials of dexamethasone versus placebo was relatively small, and the timing of dexamethasone administration was not standardized in these studies. In two studies conducted in Turkey and Egypt, dexamethasone was associated with decreased hearing loss (66, 77) . For the largest number of children with pneumococcal meningitis (n ϭ 33) enrolled in a single study in the United States, bilateral hearing loss (3 of 11 children) was no different in the dexamethasone-treated children than in those receiving placebo (2 of 20) (147) . However, this study has been criticized because dexamethasone was not given routinely just before or concomitant with the first dose of parenteral antibiotics. Nevertheless, in this study, dexamethasone was associated with a significant reduction in hearing loss for children with meningitis due to Haemophilus influenzae type b. In addition, inflammatory parameters were diminished to an equivalent degree in experimental pneumococcal meningitis when dexamethasone was administered 30 min before or 60 min after ampicillin treatment (89) . In a retrospective analysis of children with pneumococcal meningitis, Arditi et al. (9) found no benefit with respect to hearing loss for children receiving dexamethasone either before or up to 1 h after the first dose of parenteral antibiotics compared with children never receiving any dexamethasone. A recent meta-analysis of randomized clinical trials of dexamethasone as adjunctive therapy in bacterial meningitis has concluded that "if commenced with or before parenteral antibiotics, (available evidence) suggests benefit for pneumococcal meningitis in childhood" (95) .

     The Committee on Infectious Diseases of the American Academy of Pediatrics recommends that dexamethasone should be considered for the treatment of infants and children with pneumococcal meningitis (5) . There are also uncertainties about the value of dexamethasone use in adults, and even fewer studies have been performed with adults than with children. Some experts recommend dexamethasone for adults with meningitis with a positive Gram stain of CSF (suggestive of a high concentration of bacteria in the CSF) and evidence of increased intracranial pressure (122) .

     For any patient who is not improving as expected or who has a pneumococcal isolate for which the cefotaxime or ceftriaxone MIC is Ն2.0 g/ml, a repeat lumbar puncture 36 to 48 h after initiation of therapy is recommended to document sterility of the CSF. This is particularly crucial for patients who are receiving adjunctive dexamethasone therapy, since they may appear to be responding to antibiotic therapy with a decrease in fever despite the CSF remaining culture positive (44) .

     The management of pneumococcal bacteremia due to antibiotic-resistant isolates is not as well formulated as it is for VOL. 11, 1998 ANTIBIOTIC-RESISTANT S. PNEUMONIAE INFECTIONSmeningitis. Although pneumococcal bacteremia without a source is a relatively common invasive bacterial infection in children, only a few studies focusing on treatment outcome of infections due to isolates intermediate or resistant to penicillin or to cefotaxime and ceftriaxone have been performed. Table  3 compiles those cases from various reports that provide enough detail regarding treatment and outcome. For the majority of reported patients, resistance to penicillin is of the intermediate variety and the outcome of therapy certainly does not predict outcomes for patients whose isolates have greater resistance. Rarely have treatment failures been reported or documented for penicillin-nonsusceptible pneumococcal isolates. Breakthrough pneumococcal bacteremia and meningitis were documented in a normal 18-month-old child after receiving cefotaxime (180 mg/kg/day) for 2 days and subsequently receiving cefuroxime (200 mg/kg/day) for 4 days (27) .
  sentences:
  - >-
    What is the role of NOD2 in the immune response and the pathogenesis of
    inflammatory bowel disease (IBD)?
  - |
    What are the characteristics of biliary diseases in elderly patients?
  - >
    What are the potential benefits and risks of using dexamethasone as an
    adjunctive therapy for pneumococcal meningitis in children?
- source_sentence: >-
    Three had personal and family issues to attend to. The peer counsellors
    presented their reports which were then discussed with the supervisors.

     At the beginning of the training some peer counsellors were hoping to be trained as health workers while others wanted to learn how to improve breastfeeding of their babies. Some suggested that they receive uniforms to identify them in the community. The peer counsellors expressed a strong wish to be given bicycles to ease their mobility around the villages and a monthly allowance equivalent to US$10. Transportation was the most "felt need" identified by the peer counsellors. One peer counsellor said,

     Another peer counsellor said,

     The peer counsellors were each given a bicycle for ease of movement during peer counselling visits.

     Lessons learnt from this study are summarised in Table 3 .

     This study showed that rural Ugandan women with modest formal education can be trained in breastfeeding counselling successfully. On returning to their communities, they were able to provide help and support to breastfeeding mothers to improve their breastfeeding technique and breastfeed exclusively. This is in agreement with what other studies have found [20] [21] [22] .

     The peer counsellors expressed a desire to learn more about breastfeeding at the beginning of the course. This was despite breastfeeding being culturally accepted and widely practiced in the community. The peer counsellors believed that breast milk alone was not enough for a baby up to the age of six months. A similar belief was also perceived at the lactation clinic of Mulago hospital in Uganda [31] . The training curriculum covered all the questions asked by the peer counsellors at the beginning of the course. This gave the peer counsellors the confidence that they would be able to answer questions posed by their peers. Since we did not administer pre-and post-test during training, our assessment of the knowledge they gained from the training is limited.

     We also found that there are cultural and traditional beliefs and practices regarding breastfeeding which may influence the practice of exclusive breastfeeding negatively. Beliefs and practices related to expressing breast milk, use of colostrum together with understanding and managing breast conditions during breastfeeding may not be supportive of exclusive breastfeeding. Other studies have also highlighted traditional and cultural beliefs and practices related to breastfeeding that may negatively influence the practice of exclusive breastfeeding [7] [8] [9] .

     At the beginning of the training for health workers, they were asked what they expected to learn from the training course. A list of their expectations was made and it was interesting to note that most of the expectations of the health workers were similar to those of the peer counsellors at the beginning of training. This suggests that community women could perform as well as, or even better than the health workers in supporting mothers to exclusively breast feed their babies. However, we did not compare the performance of the two groups in this study.

     The peer counsellors were also able to identify common breastfeeding problems in their communities. They appreciated the fact that the training they received had empowered them with skills to help the mothers overcome these problems. The commonly identified breastfeeding problems included "not enough breast milk", sore nipples and mastitis as well as identifying poor positioning of a baby at the breast. This was also reported in a previous hospital based study in Uganda [31] .

     We further observed that follow-up of the peer counsellors in their communities helped to motivate them so that they neither failed nor lost their confidence. Follow up supervision served as a way of addressing the challenges the peer counsellors met in their work and this was appreciated. It provided a mechanism for continued training for them as well sharing their experiences with each other and their supervisors. They were able to consult where they encountered difficulties. This interaction provided an avenue for the supervisors to re-enforce some information and skills which were observed to be deficient while observing the peer counsellors at work. Often the peer counsellors were able to suggest solutions during meetings which boosted their confidence further. This also added to their credibility with the mothers. This is similar The Intervention  Training rural women as peer counsellors for support of exclusive breastfeeding is feasible  Introducing an activity in a community can be a long process requiring multiple visits starting with the district down to the lowest level to ensure community involvement.
  sentences:
  - >
    What is the advantage of intra-arterial administration of cisplatin for the
    treatment of squamous cell carcinoma (SCC) in the head and neck region?
  - >
    How did the follow-up supervision of the peer counsellors contribute to
    their success in supporting breastfeeding mothers?
  - |
    How does statin use affect the mortality rates of CDI patients?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: ModernBERT Embed base miriad
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 768
      type: dim_768
    metrics:
    - type: cosine_accuracy@1
      value: 0.6655
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9045
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9455
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9695
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6655
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3015
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18910000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09695000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6655
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9045
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9455
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9695
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8327188716379244
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7870591269841258
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7883513963047813
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 512
      type: dim_512
    metrics:
    - type: cosine_accuracy@1
      value: 0.668
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.943
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9675
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.668
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.29999999999999993
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18860000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09675000000000003
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.668
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.943
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9675
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8309776210344206
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7855371031746019
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7869116138238026
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 256
      type: dim_256
    metrics:
    - type: cosine_accuracy@1
      value: 0.6435
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.891
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.933
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.964
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6435
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.29699999999999993
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.18660000000000002
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0964
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6435
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.891
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.933
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.964
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8178337204291636
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7692660714285701
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7707133076297497
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 128
      type: dim_128
    metrics:
    - type: cosine_accuracy@1
      value: 0.637
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8665
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9105
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.946
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.637
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.28883333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.1821
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.0946
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.637
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8665
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9105
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.946
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.8028123299777913
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7555621031746026
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7576366680017745
      name: Cosine Map@100
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: dim 64
      type: dim_64
    metrics:
    - type: cosine_accuracy@1
      value: 0.568
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8155
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.865
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9165
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.568
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2718333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.173
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.09165000000000001
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.568
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8155
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.865
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9165
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.7516283698242127
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.6977043650793642
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.700687781526774
      name: Cosine Map@100
datasets:
- miriad/miriad-4.4M
---

# ModernBERT Embed base miriad

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) on an 8,095-example subset of the [miriad-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - 8,095 samples from [miriad-4.4M](https://huggingface.co/datasets/miriad/miriad-4.4M)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("digo-prayudha/test-modernbert-embed-base-miriad")
# Run inference
sentences = [
    'Three had personal and family issues to attend to. The peer counsellors presented their reports which were then discussed with the supervisors.\n\n At the beginning of the training some peer counsellors were hoping to be trained as health workers while others wanted to learn how to improve breastfeeding of their babies. Some suggested that they receive uniforms to identify them in the community. The peer counsellors expressed a strong wish to be given bicycles to ease their mobility around the villages and a monthly allowance equivalent to US$10. Transportation was the most "felt need" identified by the peer counsellors. One peer counsellor said,\n\n Another peer counsellor said,\n\n The peer counsellors were each given a bicycle for ease of movement during peer counselling visits.\n\n Lessons learnt from this study are summarised in Table 3 .\n\n This study showed that rural Ugandan women with modest formal education can be trained in breastfeeding counselling successfully. On returning to their communities, they were able to provide help and support to breastfeeding mothers to improve their breastfeeding technique and breastfeed exclusively. This is in agreement with what other studies have found [20] [21] [22] .\n\n The peer counsellors expressed a desire to learn more about breastfeeding at the beginning of the course. This was despite breastfeeding being culturally accepted and widely practiced in the community. The peer counsellors believed that breast milk alone was not enough for a baby up to the age of six months. A similar belief was also perceived at the lactation clinic of Mulago hospital in Uganda [31] . The training curriculum covered all the questions asked by the peer counsellors at the beginning of the course. This gave the peer counsellors the confidence that they would be able to answer questions posed by their peers. Since we did not administer pre-and post-test during training, our assessment of the knowledge they gained from the training is limited.\n\n We also found that there are cultural and traditional beliefs and practices regarding breastfeeding which may influence the practice of exclusive breastfeeding negatively. Beliefs and practices related to expressing breast milk, use of colostrum together with understanding and managing breast conditions during breastfeeding may not be supportive of exclusive breastfeeding. Other studies have also highlighted traditional and cultural beliefs and practices related to breastfeeding that may negatively influence the practice of exclusive breastfeeding [7] [8] [9] .\n\n At the beginning of the training for health workers, they were asked what they expected to learn from the training course. A list of their expectations was made and it was interesting to note that most of the expectations of the health workers were similar to those of the peer counsellors at the beginning of training. This suggests that community women could perform as well as, or even better than the health workers in supporting mothers to exclusively breast feed their babies. However, we did not compare the performance of the two groups in this study.\n\n The peer counsellors were also able to identify common breastfeeding problems in their communities. They appreciated the fact that the training they received had empowered them with skills to help the mothers overcome these problems. The commonly identified breastfeeding problems included "not enough breast milk", sore nipples and mastitis as well as identifying poor positioning of a baby at the breast. This was also reported in a previous hospital based study in Uganda [31] .\n\n We further observed that follow-up of the peer counsellors in their communities helped to motivate them so that they neither failed nor lost their confidence. Follow up supervision served as a way of addressing the challenges the peer counsellors met in their work and this was appreciated. It provided a mechanism for continued training for them as well sharing their experiences with each other and their supervisors. They were able to consult where they encountered difficulties. This interaction provided an avenue for the supervisors to re-enforce some information and skills which were observed to be deficient while observing the peer counsellors at work. Often the peer counsellors were able to suggest solutions during meetings which boosted their confidence further. This also added to their credibility with the mothers. This is similar The Intervention • Training rural women as peer counsellors for support of exclusive breastfeeding is feasible • Introducing an activity in a community can be a long process requiring multiple visits starting with the district down to the lowest level to ensure community involvement.',
    'How did the follow-up supervision of the peer counsellors contribute to their success in supporting breastfeeding mothers?\n',
    'How does statin use affect the mortality rates of CDI patients?\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.7550, -0.0372],
#         [ 0.7550,  1.0000, -0.0165],
#         [-0.0372, -0.0165,  1.0000]])
```
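
Because the model was trained with `MatryoshkaLoss`, its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a modest drop in retrieval quality (see the Evaluation section below). A minimal sketch, assuming you want 256-dimensional vectors (the `truncate_dim` value here is just an example):

```python
from sentence_transformers import SentenceTransformer

# Load the same model, but keep only the first 256 Matryoshka dimensions.
model = SentenceTransformer(
    "digo-prayudha/test-modernbert-embed-base-miriad",
    truncate_dim=256,
)

embeddings = model.encode([
    "How can student migration impact graduation rate estimates?",
])
print(embeddings.shape)
# (1, 256)
```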

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 768
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6655     |
| cosine_accuracy@3   | 0.9045     |
| cosine_accuracy@5   | 0.9455     |
| cosine_accuracy@10  | 0.9695     |
| cosine_precision@1  | 0.6655     |
| cosine_precision@3  | 0.3015     |
| cosine_precision@5  | 0.1891     |
| cosine_precision@10 | 0.097      |
| cosine_recall@1     | 0.6655     |
| cosine_recall@3     | 0.9045     |
| cosine_recall@5     | 0.9455     |
| cosine_recall@10    | 0.9695     |
| **cosine_ndcg@10**  | **0.8327** |
| cosine_mrr@10       | 0.7871     |
| cosine_map@100      | 0.7884     |

#### Information Retrieval

* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 512
  }
  ```

| Metric              | Value     |
|:--------------------|:----------|
| cosine_accuracy@1   | 0.668     |
| cosine_accuracy@3   | 0.9       |
| cosine_accuracy@5   | 0.943     |
| cosine_accuracy@10  | 0.9675    |
| cosine_precision@1  | 0.668     |
| cosine_precision@3  | 0.3       |
| cosine_precision@5  | 0.1886    |
| cosine_precision@10 | 0.0968    |
| cosine_recall@1     | 0.668     |
| cosine_recall@3     | 0.9       |
| cosine_recall@5     | 0.943     |
| cosine_recall@10    | 0.9675    |
| **cosine_ndcg@10**  | **0.831** |
| cosine_mrr@10       | 0.7855    |
| cosine_map@100      | 0.7869    |

#### Information Retrieval

* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 256
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6435     |
| cosine_accuracy@3   | 0.891      |
| cosine_accuracy@5   | 0.933      |
| cosine_accuracy@10  | 0.964      |
| cosine_precision@1  | 0.6435     |
| cosine_precision@3  | 0.297      |
| cosine_precision@5  | 0.1866     |
| cosine_precision@10 | 0.0964     |
| cosine_recall@1     | 0.6435     |
| cosine_recall@3     | 0.891      |
| cosine_recall@5     | 0.933      |
| cosine_recall@10    | 0.964      |
| **cosine_ndcg@10**  | **0.8178** |
| cosine_mrr@10       | 0.7693     |
| cosine_map@100      | 0.7707     |

#### Information Retrieval

* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 128
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.637      |
| cosine_accuracy@3   | 0.8665     |
| cosine_accuracy@5   | 0.9105     |
| cosine_accuracy@10  | 0.946      |
| cosine_precision@1  | 0.637      |
| cosine_precision@3  | 0.2888     |
| cosine_precision@5  | 0.1821     |
| cosine_precision@10 | 0.0946     |
| cosine_recall@1     | 0.637      |
| cosine_recall@3     | 0.8665     |
| cosine_recall@5     | 0.9105     |
| cosine_recall@10    | 0.946      |
| **cosine_ndcg@10**  | **0.8028** |
| cosine_mrr@10       | 0.7556     |
| cosine_map@100      | 0.7576     |

#### Information Retrieval

* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
  ```json
  {
      "truncate_dim": 64
  }
  ```

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.568      |
| cosine_accuracy@3   | 0.8155     |
| cosine_accuracy@5   | 0.865      |
| cosine_accuracy@10  | 0.9165     |
| cosine_precision@1  | 0.568      |
| cosine_precision@3  | 0.2718     |
| cosine_precision@5  | 0.173      |
| cosine_precision@10 | 0.0917     |
| cosine_recall@1     | 0.568      |
| cosine_recall@3     | 0.8155     |
| cosine_recall@5     | 0.865      |
| cosine_recall@10    | 0.9165     |
| **cosine_ndcg@10**  | **0.7516** |
| cosine_mrr@10       | 0.6977     |
| cosine_map@100      | 0.7007     |
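
The tables above were produced with `InformationRetrievalEvaluator` at each truncation dimension. A minimal sketch of how such an evaluation could be reproduced; the `queries`, `corpus`, and `relevant_docs` dictionaries below are placeholders and would come from a held-out split of the training data:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("digo-prayudha/test-modernbert-embed-base-miriad")

# Placeholder data: query ids -> texts, document ids -> texts,
# and query ids -> set of relevant document ids.
queries = {"q1": "How can student migration impact graduation rate estimates?"}
corpus = {"d1": "Recently proposed grade-based graduation measures ..."}
relevant_docs = {"q1": {"d1"}}

for dim in (768, 512, 256, 128, 64):
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        truncate_dim=dim,
        name=f"dim_{dim}",
    )
    # Returns the accuracy/precision/recall/NDCG/MRR/MAP metrics for this dimension.
    print(dim, evaluator(model))
```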

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### json

* Dataset: json
* Size: 8,095 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
  |         | positive                                                                              | anchor                                                                           |
  |:--------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
  | type    | string                                                                                | string                                                                           |
  | details | <ul><li>min: 467 tokens</li><li>mean: 944.9 tokens</li><li>max: 1460 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.7 tokens</li><li>max: 61 tokens</li></ul> |
* Samples:
  | positive                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       | anchor                                                                                                            |
  |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|
  | <code>24 If the dot is in a fixed location, it is called a laser Doppler flow meter. If the beam scans the skin, a large skin area can be scanned and a laser Doppler image of the skin surface reflecting blood flow can be seen. [25] [26] [27] These devices are called laser Doppler imagers. Another technique is laser speckle flow imagers. They project a constant speckle laser pattern on the skin to obtain rapid pictures of flow, typically 25 per second compared with 2 min using a laser Doppler imager. 28 The speed is a sacrifice for depth of penetration, which is less than 1/2 dermal thickness. Depending on laser frequency and power, all techniques have different areas they cover and different penetration into tissue.<br><br> There are numerous pros and cons to this technique. First, skin blood flow varies continuously because of vasomotor rhythm and respiration. Blood flow increases slightly during exhaling and is reduced slightly during inhalation. If flow is sampled too quickly, it may be high or...</code> | <code>How does the heated thermistor pair technique measure skin blood flow?<br></code>                           |
  | <code>126 -128 Furthermore, NGF is locally up-regulated in humans presenting with chronic pain, such as arthritis, migraine/headache, fibromyalgia, or peripheral nerve injury. 129 -132 These observations suggest that in humans, as in preclinical animal models, the ongoing production of NGF may be involved in chronic pain and changes in sensitization. Indeed, there are at least three major pharmacologic strategies under development that target NGF-TrkA signaling for the treatment of chronic pain and that have produced effective reduction in hypersensitivity in preclinical models. These are sequestration of NGF or inhibiting its binding to TrkA, 61, 133 antagonizing TrkA so as to block NGF from binding to TrkA, 134 -136 and blocking TrkA kinase activity. 137 Among the first such molecules to be investigated preclinically were a TrkA-IgG fusion protein, 138 MNAC13, 134 and PD90780, 136 which act by inhibiting the binding of NGF to TrkA and ALE0540, 135 which appears to act by modulating the int...</code>       | <code>How do humanized anti-NGF monoclonal antibodies exert their analgesic effect?<br></code>                    |
  | <code>It was not possible to correct the estimates for withinindividual variation in levels of the liver enzymes over time which may have underestimated the associations, because data involving repeat measurements were not reported by all the contributing studies. There are data to suggest that the levels of these enzymes in individuals can fluctuate considerably over time 61 ; hence, the associations demonstrated may be even stronger. Studies are therefore needed with serial measurements of these liver enzymes to be able to adjust for regression dilution bias.<br><br> There was substantial heterogeneity among the available prospective studies. Given this, it was debatable whether pooled estimates should be presented rather than reporting estimates in relevant subgroups, as the presence of heterogeneity makes pooling of risk estimates data somewhat controversial. We however systematically explored and identified the possible sources of heterogeneity using stratified analyses, meta-regression and s...</code> | <code>Are there geographical variations in the association between ALT levels and all-cause mortality?<br></code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
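
The JSON above maps directly onto the loss constructor in Sentence Transformers. Below is a minimal sketch of how this configuration could be built; the base checkpoint name is a placeholder, not necessarily the model this run started from:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Placeholder base checkpoint; substitute the actual base model used for this run.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Inner objective: in-batch negatives ranking over (anchor, positive) text pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same objective at each truncated embedding size with equal weights,
# mirroring the matryoshka_dims / matryoshka_weights shown above.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```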

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `gradient_accumulation_steps`: 32
- `learning_rate`: 2e-05
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 32
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>

### Training Logs
| Epoch   | Step   | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0     | 32     | -             | 0.8271                 | 0.8235                 | 0.8137                 | 0.7953                 | 0.7404                |
| 1.5692  | 50     | 0.1536        | -                      | -                      | -                      | -                      | -                     |
| 2.0     | 64     | -             | 0.8328                 | 0.8312                 | 0.8169                 | 0.8022                 | 0.7519                |
| **3.0** | **96** | **-**         | **0.8327**             | **0.831**              | **0.8178**             | **0.8028**             | **0.7516**            |

* The bold row denotes the saved checkpoint.
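
The per-dimension `cosine_ndcg@10` columns above correspond to truncating the embeddings to each Matryoshka size. As a rough illustration (the model id below is a placeholder for this repository's id), a smaller dimension can be selected at load time via `truncate_dim`:

```python
from sentence_transformers import SentenceTransformer

# Placeholder id; replace with this repository's model id or a local path.
# truncate_dim picks one of the Matryoshka sizes evaluated above (768, 512, 256, 128, 64).
model = SentenceTransformer("your-username/this-model", truncate_dim=256)

embeddings = model.encode([
    "How does the heated thermistor pair technique measure skin blood flow?",
])
print(embeddings.shape)  # (1, 256)
```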

### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
