
WO2025059561A1 - System for extracting and quantifying statistical problems in documents - Google Patents


Info

Publication number
WO2025059561A1
Authority
WO
WIPO (PCT)
Prior art keywords
statistical
layer
input text
embeddings
reported
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/046747
Other languages
French (fr)
Inventor
Daniel Ernesto Acuna
Elizabeth Ester Novoa MONSALVE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Colorado System
University of Colorado Denver
Original Assignee
University of Colorado System
University of Colorado Denver
Application filed by University of Colorado System, University of Colorado Denver filed Critical University of Colorado System
Publication of WO2025059561A1 publication Critical patent/WO2025059561A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G06N7/01 Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • NHST null-hypothesis significance testing
  • the default threshold for statistical significance in NHST is typically set at p < 0.05, indicating that the observed results are unlikely to occur by chance alone.
  • NHST continues to be widely used in scientific research, including fields such as psychology and biology.
  • an NHST is reported in the text with several pieces of information besides the p-value, such as the test name, sample size, and test statistics. If any piece of the test is manipulated, the rest of the statistical information can be used to “reverse-engineer” it. Since the p-value is so central to deciding whether a hypothesis is rejected or accepted, the p-value is usually the focus of analyses. If there is a large discrepancy between the reported p-value and the recomputed one, there is suspicion of an error, mistake, or manipulation.
  • p-curves e.g., distributions of statistically significant p-values
  • Various tests for detecting such mistakes have been developed.
  • Disclosed embodiments include a named-entity recognition approach for identifying, extracting, and judging statistical results from scientific or other texts.
  • Disclosed embodiments include computer systems and methods for analyzing documents with a natural language processor (NLP).
  • NLP natural language processor
  • the NLP may utilize artificial intelligence to identify statistical data and/or other mathematical data.
  • the NLP can extract the data from the documents and then the data can be validated to determine if the data is correct or internally consistent.
  • the data is analyzed and a probability of reliability is generated. For example, disclosed embodiments may determine that a particular set of data is only 13% reliable based upon a statistical analysis of the data as presented.
  • Embodiments can detect, parse, and quantify the quality of statistical reports in documents with a significantly wider range of tests and with a significantly higher quality than conventional approaches.
  • Figure 1 illustrates a conceptual diagram showing operational aspects of a statistical detection model.
  • Figure 2 illustrates a conceptual diagram showing operational aspects of a statistical extraction model, which can process input text identified via the statistical detection model to extract statistical information therefrom for reconstructing statistical tests.
  • Figure 3 illustrates a conceptual diagram illustrating statistical consistency validation.
  • Figure 4 illustrates a table depicting model performance for different statistical detection model configurations.
  • Figure 5 illustrates a table depicting model performance of different statistical extraction models.
  • Figure 6 illustrates an example flow diagram depicting acts associated with the disclosed subject matter.
  • Figure 7 illustrates example components of a system that may comprise or implement aspects of one or more disclosed embodiments.
  • Detecting statistical problems in scientific documents presents several challenges.
  • One challenge is to detect whether statistical information is present in a section of text. This challenge is complicated by attribution issues. For instance, authors of a target publication often include a discussion of the results found in other works, which can give rise to the need to distinguish between the results asserted by author(s) of the target publication and the results from other works that are referenced in the target publication.
  • Another challenge is to extract the pieces necessary to recompute p-values (e.g., statistical information extraction). Authors report statistical results with considerable variation, which can necessitate accommodating such variability. Yet another challenge is determining whether the extracted information has inconsistencies (e.g., statistical consistency validation).
  • the disclosed subject matter is directed to addressing the issue of statistical mistakes (e.g., where statistical outcomes, such as p-values, are intentionally or unintentionally manipulated).
  • At least some disclosed embodiments are directed to a named-entity recognition strategy for identifying and/or quantifying statistical mistakes/problems in documents.
  • the presently disclosed name-entity recognition framework for detecting, extracting, and/or assessing consistency of statistical reporting is sometimes referred to herein as STATSNERD (Statistical Name-Entity Recognition Diagnostics).
  • the disclosed STATSNERD framework can include multiple tasks, including (i) statistical detection, (ii) statistical information extraction, and (iii) statistical consistency validation.
  • the statistical detection task can comprise detecting whether a particular paragraph contains a statistical report or result related to the results of the target document (or the current document/paper). For instance, documents sometimes discuss results associated with references cited in the document, though such results are not of interest in assessing whether the results currently asserted by the document include statistical errors/mistakes.
  • the statistical information extraction task involves extracting information necessary to reconstruct a statistical test.
  • the statistical consistency validation involves determining the consistency of the statistical report or result depending on the reported p-value, computed p-value, and significance value.
  • Figure 1 illustrates a conceptual diagram 100 showing operational aspects of a statistical detection model 110.
  • Figure 1 depicts the statistical detection model 110 as being configured or adapted to receive, as an input, a document 120 and/or paragraphs 122 thereof.
  • the document 120 comprises a scientific or research paper.
  • the document 120 and/or its paragraphs 122 can be structured in any suitable manner and/or can be subjected to any suitable pre-processing operations in preparation for processing by the statistical detection model 110.
  • the statistical detection model 110 is configured to process the document 120 and provide output 130 (e.g., classifications/labels) indicating which of the paragraphs 122 of the document 120 include one or more statistical reports or results related to the results of the document 120.
  • the statistical detection model 110 utilizes one or more classifiers which process feature outputs of one or more initial models to determine the output 130.
  • the classifier(s) can comprise or utilize, by way of non-limiting example, regularized logistic regression (LR), multi-layer perceptron (MLP), gradient boosting classifier (GBC), and/or others.
  • the initial model(s), whose outputs are processed by the classifier(s) to determine the output 130, can comprise one or more language models configured to process the document 120 (and/or the paragraphs 122 thereof) to estimate character and/or word frequencies, term frequency-inverse document frequency (TF-IDF) statistics, etc. associated with the presence of statistical reports and/or values.
  • TF-IDF term frequency-inverse document frequency
  • the initial model(s) can additionally or alternatively include one or more featurizers that determine the frequencies of p-values, test names, test statistics, degrees of freedom, and/or other components in the document 120 (and/or the paragraphs 122 thereof). For example, such components can be detected via the regular expressions in the R package STATCHECK (or other packages/functions may be used).
  • the initial model(s) can additionally or alternatively include one or more embedding models configured to process the document 120 (and/or the paragraphs 122 thereof) to output text-based or document-based embeddings.
  • the statistical detection model 110 (indicating whether a given paragraph 122 of a document 120 includes statistical analyses) can be characterized as p(statistics present | para.) = f(char+word n-gram, statistical featurizer, embeddings), where p represents the output 130 indicating whether a given paragraph includes statistical results/reports/analyses, f represents the classifier(s), char+word n-gram represents output of one or more language models for determining character, word, term, and/or inverse document frequency estimations, statistical featurizer represents output of one or more featurizers that determine the frequencies of statistical components (e.g., p-values, test names, test statistics, degrees of freedom, etc.), and embeddings represents embedding output of one or more embedding models.
  • Figure 2 illustrates a conceptual diagram 200 showing operational aspects of a statistical extraction model 210, which can process paragraphs 220 (or any input text) identified via the statistical detection model 110 to extract statistical information therefrom for reconstructing statistical tests.
  • the statistical extraction model 210 can extract statistical information such as the test name (TN), sample size (SS), test statistics (TS), probability value or p-value (PV), and/or others.
  • the statistical extraction model 210 includes three layers: an embedding layer 230, a recurrent layer 240, and a structured prediction layer 250.
  • Figure 2 also depicts a tokenizer 222 that can be configured to tokenize the paragraphs 220 to provide tokens 224 for downstream processing (e.g., by the embedding layer 230).
  • statistical results/reporting in documents often includes specialized or unique characters, such as χ² (i.e., the chi-square test), ρ (correlation), and other mathematical symbols.
  • Conventional natural language processing (NLP) models can thus fail to handle such out-of-vocabulary tokens in context.
  • the tokenizer 222 can comprise a custom statistical tokenizer tailored for parsing statistical reporting (one will appreciate that different tokenizers can be used in different subject matter domains to accommodate different symbol/character frameworks).
  • the tokenizer 222 can comprise a rule-based tokenizer that handles a diverse set of punctuations and symbols encountered in research contexts.
  • a conventional tokenizer might encounter the term “p-val<0.05” and treat the term as a single token.
  • the tokenizer 222 can treat such a term as three separate tokens, which can help statistical extraction significantly.
  • the tokenizer 222 can create token boundaries for all operators and mathematical symbols.
  • the embedding layer 230 can process the tokens 224 (i.e., tokenized text) to generate embeddings 232.
  • the embedding layer 230 can take on various forms, such as a stacked embedding layer that combines contextual word embeddings and contextual string embeddings.
  • the embedding layer 230 combines pre-trained BERT (Bidirectional Encoder Representations from Transformers) embeddings and contextual string embeddings (CSE) to learn syntactic and semantic features.
  • the statistical embeddings 232 can comprise a 1792-dimensional vector, e.g., a concatenation of the BERT and CSE embeddings for each token i: w_i = [w_i^BERT ; w_i^CSE]
  • the recurrent layer 240 can be configured to capture forward and backward context for tokens, thereby providing contextual features 242.
  • the recurrent layer 240 can be implemented as one or more Bi-LSTM (Bidirectional Long Short-Term Memory) layers, other RNN (Recurrent Neural Network) variants, transformer models, attention layers/models, and/or others.
  • the recurrent layer 240 can comprise two stacked LSTM units, each with 128 hidden units.
  • the embeddings 232 generated via the embedding layer 230 can be passed into the recurrent layer 240 (e.g., a Bi-LSTM layer) to learn features and capture contextual information from both forward and backward directions.
  • Example mathematics for a recurrent layer 240 implemented as a Bi-LSTM layer follow the standard LSTM formulation, where x_t represents the input at time step t, h_t is the hidden state, c_t is the cell state, and i_t, f_t, and o_t are the input, forget, and output gates, respectively.
  • the weights W and biases b are learnable parameters, σ denotes the sigmoid activation function, and ⊙ represents element-wise multiplication.
  • the forward LSTM processes the input sequence from the beginning, while the backward LSTM processes it in reverse order.
  • the structured prediction layer 250 can be configured to learn the label dependencies for the contextual features 242 across both directions.
  • the structured prediction layer 250 can be implemented as a conditional random field (CRF) layer that models complex label dependencies and captures contextual information (though other prediction models may be used, such as transformer models, hidden Markov models, attention layers/models, multi-layer perceptron layers, etc.).
  • CRF decoding layer (as the structured prediction layer 250) can be implemented in conjunction with the Bi-LSTM layer (as the recurrent layer 240) to optimize the label sequence (e.g., label sequence 252) for a given input sequence (e.g., tokens 224).
  • the CRF can model the conditional probability distribution of a label sequence y (e.g., label sequence 252), given the input sequence x (e.g., tokens 224), as a function of the weights of features (e.g., contextual features 242).
  • w_j are the weights learned for the k features (e.g., contextual features 242) of a token (e.g., tokens 224).
  • the conditional probability distribution can be normalized using the partition function Z(x).
  • the model can learn to classify each token (e.g., tokens 224) dependent on the label prediction scores of its surrounding tokens, thereby obtaining the label sequence 252 and achieving name-entity recognition (NER) for statistical terms in the paragraphs 220.
  • NER name-entity recognition
  • the recognized entities can provide a basis for extracting statistical information from the paragraphs 220 that can be used to reconstruct statistical tests to perform statistical consistency validation.
  • Figure 2 conceptually depicts an example annotated paragraph 260, where certain tokens of the annotated paragraph 260 are labeled based on output of the structured prediction layer 250 (e.g., implemented as a CRF layer) to indicate the entities or statistical information represented by the labeled tokens.
  • tokens of the annotated paragraph 260 that fall outside of any entity associated with statistical information are not labeled in Figure 2.
  • the statistical information may be extracted without determining a label sequence 252 and using the label sequence 252 to inform the extraction of the statistical information from the set of input text.
  • labels indicating the statistical information may be directly inferred/predicted using one or more artificial intelligence (Al) models by processing the input text, the tokens, the set of embeddings generated based on the input text, and/or the set of contextual features generated based on the set of embeddings.
  • AI artificial intelligence
  • FIG. 3 illustrates a conceptual diagram 300 illustrating statistical consistency validation, where statistical information 320 obtained via name-entity recognition (e.g., via statistical extraction model 210) is utilized by a statistical validation module 310.
  • the statistical validation module 310 can comprise a model-free component that operates based on statistical reporting principles or predefined rules. For instance, given the statistical information 320, the statistical validation module 310 can reverse engineer statistical tests (e.g., to determine the p-value) to provide one or more consistency labels 330 indicating whether a statistical report detected in a document (e.g., document 120) complies or is consistent with one or more standards, thresholds, or applicable statistical reporting principles.
  • Example consistency labels can include “consistent”, “inconsistent”, “grossly inconsistent”, variations thereon, and/or others.
  • the consistency of a statistical report detected in a document can be computed from the reported p-value, the recomputed p-value, and the significance level (Equation 4): the report is labeled consistent if the reported and recomputed p-values agree up to rounding, inconsistent if they differ beyond rounding, and grossly inconsistent if the discrepancy changes whether the result crosses the significance level (a minimal validation sketch is provided after this list).
  • the statistical detection model 110 utilized a grid search to cross-validate the best n-gram ranges among 1-2, 1-3, and 2-3 for the character and word n-grams.
  • The R package STATCHECK was used as a featurizer, and SPECTER (SPECTER: Document-level Representation Learning using Citation-informed Transformers; Cohan, A.; Feldman, S.; Beltagy, I.; Downey, D.; and Weld, D. S. 2020. In ACL.) and MiniLM (MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers; Wang, W.; Wei, F.; Dong, L.; Bao, H.; Yang, N.; and Zhou, M. 2020. In NeurIPS.) were used as embedding models.
  • the dataset for statistical detection (e.g., for training, evaluation, validation, etc.) was obtained by randomly sampling 1,500 paragraphs from the PubMed Open Access Subset (PMOAS) dataset.
  • the dataset for statistical extraction was obtained by annotating 7,135 paragraphs from PMOAS and from the SCORE project (Systematizing confidence in open research and evidence (SCORE); Alipourfard, N.; Arendt, B.; Benjamin, D. J.; Benkler, N.; Bishop, M. M.; Burstein, M.; Bush, M.; Caverlee, J.; Chen, Y.; and Clark, C. 2021. Technical Report, Center for Open Science.).
  • the dataset comprised a combination of diverse variations encountered in statistical reporting.
  • each sentence was converted into an explicit sequence labeling format, the IOB format (Text chunking using transformation-based learning; Ramshaw, L.; and Marcus, M. 1995.).
  • Results were aggregated at the article level in two ways. First, the average number of incomplete statistics within an article was computed. Second, the statistical inconsistencies present in an article were analyzed (per Equation 4). It was determined that 65% of the documents contained at least one incomplete statistic, 64.6% had all statistics consistent (35.3% contained at least one inconsistent statistic), and 0.88% contained a grossly inconsistent statistic.
  • Disclosed embodiments are directed to techniques for detecting statistical errors in scientific documents, papers, publications, etc.
  • the disclosed techniques rely at least in part on a named-entity recognition approach to detect, extract, and assess whether statistical tests are consistent.
  • the disclosed data-driven model can capture significantly more variability in how scientists report their results, including other kinds of statistical tests and variations beyond the reporting format suggested by the APA.
  • the disclosed STATSNERD framework (including statistical detection, statistical extraction, and statistical validation components) can be implemented to identify statistical inconsistencies in scientific documents and can enhance scientific rigor by ensuring statistical accuracy in research and/or other publications, papers, documents, etc.
  • FIG. 6 illustrates an example flow diagram 600 depicting acts associated with the disclosed subject matter.
  • the operations depicted in flow diagram 600 may be performed using one or more components of a system 700 described hereinafter, such as processor(s) 702, storage 704, sensor(s) 706.
  • Act 602 of flow diagram 600 includes accessing a set of input text.
  • the set of input text (e.g., paragraph(s) 220) is determined by processing an input document (e.g., document 120) using a statistical detection model (e.g., statistical detection model 110) trained to identify sets of text that include statistics (e.g., output 130).
  • the statistical detection model comprises a classifier model configured to process feature output of one or more initial models.
  • the classifier model comprises a logistic regression model, a multi-layer perceptron, or a gradient boosting classifier.
  • the one or more initial models include one or more of: one or more language models, one or more featurizers, or one or more embedding models.
  • Act 604 of flow diagram 600 includes generating a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model (e.g., statistical extraction model 210).
  • the set of tokens is generated using a tokenizer (e.g., tokenizer 222) that creates token boundaries for mathematical symbols and operators.
  • the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings.
  • Act 606 of flow diagram 600 includes generating a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model.
  • the recurrent layer comprises a bidirectional long short-term memory layer.
  • Clause 1 A system for validating statistical information using name-entity recognition, comprising: one or more processors; and one or more computer-readable recording media that store instructions that are executable by the one or more processors to configure the system to: access a set of input text; generate a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
  • Clause 2 The system of clause 1, wherein the set of input text is determined by processing an input document using a statistical detection model trained to identify sets of text that include statistics.
  • Clause 3 The system of clause 2, wherein the statistical detection model comprises a classifier model configured to process feature output of one or more initial models.
  • Clause 4 The system of clause 3, wherein the classifier model comprises a logistic regression model, a multi-layer perceptron, or a gradient boosting classifier.
  • the one or more initial models include one or more of: one or more language models, one or more featurizers, or one or more embedding models.
  • Clause 7 The system of clause 6, wherein the set of tokens is generated using a tokenizer that creates token boundaries for mathematical symbols and operators.
  • Clause 10 The system of clause 1, wherein the structured prediction layer comprises a conditional random field layer.
  • Clause 11 The system of clause 1, wherein the statistical information indicated by the label sequence comprises one or more of: test name, sample size, test statistics, or probability value.
  • Clause 14 A method for validating statistical information using name-entity recognition, comprising: accessing a set of input text; generating a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generating a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generating a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstructing a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generating a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
  • Clause 15 The method of clause 14, wherein the set of input text is determined by processing an input document using a statistical detection model trained to identify sets of text that include statistics.
  • Clause 16 The method of clause 14, wherein the embedding layer generates the set of embeddings by processing a set of tokens generated using the set of input text.
  • Clause 17 The method of clause 14, wherein the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings.
  • Clause 18 The method of clause 14, wherein the recurrent layer comprises a bidirectional long short-term memory layer.
  • Clause 19 The method of clause 14, wherein the structured prediction layer comprises a conditional random field layer.
  • One or more computer-readable recording media that store instructions that are executable by one or more processors of a system to validate statistical information using name-entity recognition by configuring the system to: access a set of input text; generate a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
  • Figure 7 illustrates example components of a system 700 that may comprise or implement aspects of one or more disclosed embodiments.
  • Figure 7 illustrates an implementation in which the system 700 includes processor(s) 702, storage 704, sensor(s) 706, I/O system(s) 708, and communication system(s) 710.
  • Figure 7 illustrates a system 700 as including particular components, one will appreciate, in view of the present disclosure, that a system 700 may comprise any number of additional or alternative components.
  • the processor(s) 702 may comprise one or more sets of electronic circuitries that include any number of logic units, registers, and/or control units to facilitate the execution of computer-readable instructions (e.g., instructions that form a computer program). Such computer-readable instructions may be stored within storage 704.
  • the storage 704 may comprise physical system memory or computer-readable recording media and may be volatile, non-volatile, or some combination thereof.
  • storage 704 may comprise local storage, remote storage (e.g., accessible via communication system(s) 710 or otherwise), or some combination thereof. Additional details related to processors (e.g., processor(s) 702) and computer storage media (e.g., storage 704) will be provided hereinafter.
  • the processor(s) 702 may be configured to execute instructions stored within storage 704 to perform certain actions. In some instances, the actions may rely at least in part on communication system(s) 710 for receiving data from remote system(s) 712, which may include, for example, separate systems or computing devices, sensors, and/or others.
  • the communications system(s) 710 may comprise any combination of software or hardware components that are operable to facilitate communication between on-system components/devices and/or with off-system components/devices.
  • the communications system(s) 710 may comprise ports, buses, or other physical connection apparatuses for communicating with other devices/components.
  • the communications system(s) 710 may comprise systems/components operable to communicate wirelessly with external systems and/or devices through any suitable communication channel(s), such as, by way of non-limiting example, Bluetooth, ultra-wideband, WLAN, infrared communication, and/or others.
  • Figure 7 illustrates that a system 700 may comprise or be in communication with sensor(s) 706.
  • Sensor(s) 706 may comprise any device for capturing or measuring data representative of perceivable phenomena.
  • the sensor(s) 706 may comprise one or more antennae, monopoles, image sensors, microphones, thermometers, barometers, magnetometers, accelerometers, gyroscopes, and/or others.
  • Computing system functionality can be enhanced by a computing system's ability to be interconnected to other computing systems via network connections.
  • Network connections may include, but are not limited to, connections via wired or wireless Ethernet, cellular connections, or even computer to computer connections through serial, parallel, USB, or other connections.
  • the connections allow a computing system to access services at other computing systems and to quickly and efficiently receive application data from other computing systems.
  • Interconnection of computing systems has facilitated distributed computing systems, such as so-called “cloud” computing systems.
  • cloud computing may be systems or resources for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services, etc.) that can be provisioned and released with reduced management effort or service provider interaction.
  • a cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service (“SaaS”), Platform as a Service (“PaaS”), Infrastructure as a Service (“IaaS”)), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
  • Cloud and remote based service applications are prevalent. Such applications are hosted on public and private remote systems such as clouds and usually offer a set of web based services for communicating back and forth with clients.
  • computers are intended to be used by direct user interaction with the computer. As such, computers have input hardware and software user interfaces to facilitate user interaction.
  • a modern general purpose computer may include a keyboard, mouse, touchpad, camera, etc. for allowing a user to input data into the computer.
  • various software user interfaces may be available.
  • Examples of software user interfaces include graphical user interfaces, text command line based user interface, function key or hot key user interfaces, and the like.
  • Disclosed embodiments may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in greater detail below.
  • Disclosed embodiments also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures.
  • Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system.
  • Computer-readable media that store computer-executable instructions are physical storage media.
  • Physical computer-readable storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage (such as CDs, DVDs, etc.), magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
  • a “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices.
  • a network or another communications connection can include a network and/or data links which can be used to carry program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above are also included within the scope of computer-readable media.
  • program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission computer-readable media to physical computer-readable storage media (or vice versa).
  • program code means in the form of computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer-readable physical storage media at a computer system.
  • NIC network interface module
  • computer-readable physical storage media can be included in computer system components that also (or even primarily) utilize transmission media.
  • Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
  • the computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code.
  • the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like.
  • the invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks.
  • program modules may be located in both local and remote memory storage devices.
  • the functionality described herein can be performed, at least in part, by one or more hardware logic components.
  • illustrative types of hardware logic components include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
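
The statistical consistency validation referenced above (Figure 3 and Equation 4) can be sketched as follows for a reported t-test, assuming a two-sided test and using SciPy to recompute the p-value from the extracted test statistic and degrees of freedom. The rounding tolerance and the exact labeling rules below are illustrative assumptions, not the patent's precise criteria.

```python
# Minimal sketch of statistical consistency validation for a reported t-test:
# recompute the p-value from the extracted test statistic and degrees of
# freedom, then compare it with the reported p-value. The tolerance and the
# two-sided assumption are illustrative, not the patent's exact rules.
from scipy import stats


def validate_t_test(t_stat: float, df: int, reported_p: float,
                    alpha: float = 0.05, tol: float = 0.005) -> tuple[float, str]:
    computed_p = 2 * stats.t.sf(abs(t_stat), df)       # two-sided p-value from t and df
    if abs(computed_p - reported_p) <= tol:            # agrees up to rounding
        label = "consistent"
    elif (reported_p <= alpha) != (computed_p <= alpha):
        label = "grossly inconsistent"                 # discrepancy flips the significance decision
    else:
        label = "inconsistent"
    return computed_p, label


# Example from the annotated paragraph: t(58) = 2.45, reported p = 0.018.
computed_p, label = validate_t_test(t_stat=2.45, df=58, reported_p=0.018)
print(round(computed_p, 4), label)
```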

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computational Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

A system for validating statistical information using name-entity recognition is configurable to: generate a set of embeddings based on a set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.

Description

SYSTEM FOR EXTRACTING AND QUANTIFYING STATISTICAL PROBLEMS IN DOCUMENTS
CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims the benefit of and priority to United States Provisional Patent
Application Serial No. 63/538,344 filed on 14 September 2023 and entitled “SYSTEM FOR EXTRACTING AND QUANTIFYING STATISTICAL PROBLEMS IN DOCUMENTS,” which application is expressly incorporated herein by reference in its entirety. BACKGROUND
[0002] Null-hypothesis significance testing (NHST), a form of statistical hypothesis testing, is widely used in scientific decision-making. NHST involves testing a null hypothesis, which assumes no relationship or difference between variables, against an alternative hypothesis, which suggests the presence of a relationship or difference. The default threshold for statistical significance in NHST is typically set at p < 0.05, indicating that the observed results are unlikely to occur by chance alone. NHST continues to be widely used in scientific research, including fields such as psychology and biology.
[0003] Given the prevalence of NHST, many scientists are often advertently or inadvertently tempted to change p-values to be below their established threshold. Typically, an NHST is reported in the text with several pieces of information besides the p-value, such as the test name, sample size, and test statistics. If any piece of the test is manipulated, the rest of the statistical information can be used to “reverse-engineer” it. Since the p-value is so central to deciding whether a hypothesis is rejected or accepted, the p-value is usually the focus of analyses. If there is a large discrepancy between the reported p-value and the recomputed one, there is suspicion of an error, mistake, or manipulation.
[0004] Furthermore, p-curves (e.g., distributions of statistically significant p-values) can be unreliable in distinguishing true effects from mistakes in observational research. Various tests for detecting such mistakes have been developed.
[0005] Concerns exist about how such mistakes, errors, and/or manipulations can impede scientific progress. Various studies have shown significant prevalence of questionable research practices and statistical misreporting. Accordingly, there exists a need for improved techniques for detecting statistical errors.
[0006] The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
SUMMARY [0007] In statistics, results such as p-values are sometimes inadvertently incorrect, or often intentionally modified to make them fit a hypothesis. These are surprisingly common issues that reviewers and editors find hard to detect manually. Disclosed embodiments include a named-entity recognition approach for identifying, extracting, and judging statistical results from scientific or other texts. [0008] Disclosed embodiments include computer systems and methods for analyzing documents with a natural language processor (NLP). The NLP may utilize artificial intelligence to identify statistical data and/or other mathematical data. The NLP can extract the data from the documents and then the data can be validated to determine if the data is correct or internally consistent. In at least one embodiment, the data is analyzed and a probability of reliability is generated. For example, disclosed embodiments may determine that a particular set of data is only
13% reliable based upon a statistical analysis of the data as presented. Additionally or alternatively, disclosed embodiments may identify data that is missing from documents. For example, disclosed embodiments may determine that validation is not possible because a particular subset of the data was not provided. [0009] Embodiments can detect, parse, and quantify the quality of statistical reports in documents with a significantly wider range of tests and with a significantly higher quality than conventional approaches.
[0010] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
[0011] Additional features and advantages will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the teachings herein. Features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. Features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter. BRIEF DESCRIPTION OF THE DRAWINGS
[0012] In order to describe the manner in which the above-recited and other advantages and features can be obtained, a more particular description of the subject matter briefly described above will be rendered by reference to specific embodiments which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments and are not therefore to be considered to be limiting in scope, embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which: [0013] Figure 1 illustrates a conceptual diagram showing operational aspects of a statistical detection model. [0014] Figure 2 illustrates a conceptual diagram showing operational aspects of a statistical extraction model, which can process input text identified via the statistical detection model to extract statistical information therefrom for reconstructing statistical tests.
[0015] Figure 3 illustrates a conceptual diagram illustrating statistical consistency validation.
[0016] Figure 4 illustrates a table depicting model performance for different statistical detection model configurations.
[0017] Figure 5 illustrates a table depicting model performance of different statistical extraction models.
[0018] Figure 6 illustrates an example flow diagram depicting acts associated with the disclosed subject matter. [0019] Figure 7 illustrates example components of a system that may comprise or implement aspects of one or more disclosed embodiments.
DETAILED DESCRIPTION
[0020] Statistical errors, where researchers make mistakes or manipulate data or statistical analyses to achieve significant results, regularly occur in various disciplines and affect integrity.
Many disciplines rely on statistics to make claims about their results, where the “p-value” is central to accepting or rejecting the hypothesis put forth by the authors. Statistical errors in this context are usually referred to as “p-hacking.” While understanding has been established of how empirical and theoretical p-value distributions should behave across publications, there is a lack of efficient and effective methods to parse statistical tests in publications.
[0021] Detecting statistical problems in scientific documents presents several challenges. One challenge is to detect whether statistical information is present in a section of text. This challenge is complicated by attribution issues. For instance, authors of a target publication often include a discussion of the results found in other works, which can give rise to the need to distinguish between the results asserted by author(s) of the target publication and the results from other works that are referenced in the target publication.
[0022] Another challenge is to extract the pieces necessary to recompute p-values (e.g., statistical information extraction). Authors report statistical results with considerable variation, which can necessitate accommodating such variability. Yet another challenge is determining whether the extracted information has inconsistencies (e.g., statistical consistency validation).
[0023] The disclosed subject matter is directed to addressing the issue of statistical mistakes (e.g., where statistical outcomes, such as p-values, are intentionally or unintentionally manipulated). At least some disclosed embodiments are directed to a named-entity recognition strategy for identifying and/or quantifying statistical mistakes/problems in documents. For ease of reference, the presently disclosed name-entity recognition framework for detecting, extracting, and/or assessing consistency of statistical reporting is sometimes referred to herein as STATSNERD (Statistical Name-Entity Recognition Diagnostics).
[0024] The disclosed STATSNERD framework can include multiple tasks, including (i) statistical detection, (ii) statistical information extraction, and (iii) statistical consistency validation. The statistical detection task can comprise detecting whether a particular paragraph contains a statistical report or result related to the results of the target document (or the current document/paper). For instance, documents sometimes discuss results associated with references cited in the document, though such results are not of interest in assessing whether the results currently asserted by the document include statistical errors/mistakes. The statistical information extraction task involves extracting information necessary to reconstruct a statistical test. The statistical consistency validation involves determining the consistency of the statistical report or result depending on the reported p-value, computed p-value, and significance value.
Detection, Extraction, and Validation of Statistical Reports [0025] The following discussion provides details for an example model for statistical detection, the architecture of an example statistical information extraction model, and an example statistical consistency validation method.
[0026] Figure 1 illustrates a conceptual diagram 100 showing operational aspects of a statistical detection model 110. Figure 1 depicts the statistical detection model 110 as being configured or adapted to receive, as an input, a document 120 and/or paragraphs 122 thereof. In one example, the document 120 comprises a scientific or research paper. One will appreciate that the document 120 and/or its paragraphs 122 can be structured in any suitable manner and/or can be subjected to any suitable pre-processing operations in preparation for processing by the statistical detection model 110. Although the examples described herein focus, in at least some respects, on processing of a document 120 and/or paragraphs thereof, the disclosed embodiments can be used to process any set of input text/characters. The statistical detection model 110 is configured to process the document 120 and provide output 130 (e.g., classifications/labels) indicating which of the paragraphs 122 of the document 120 include one or more statistical reports or results related to the results of the document 120.
[0027] In one embodiment, the statistical detection model 110 utilizes one or more classifiers which process feature outputs of one or more initial models to determine the output 130. The classifier(s) can comprise or utilize, by way of non-limiting example, regularized logistic regression (LR), multi-layer perceptron (MLP), gradient boosting classifier (GBC), and/or others. The initial model(s), whose outputs are processed by the classifier(s) to determine the output 130, can comprise one or more language models configured to process the document 120 (and/or the paragraphs 122 thereof) to estimate character and/or word frequencies, term frequency-inverse document frequency (TF-IDF) statistics, etc. associated with the presence of statistical reports and/or values. The initial model(s) can additionally or alternatively include one or more featurizers that determine the frequencies of p-values, test names, test statistics, degrees of freedom, and/or other components in the document 120 (and/or the paragraphs 122 thereof). For example, such components can be detected via the regular expressions in the R package STATCHECK (or other packages/functions may be used). The initial model(s) can additionally or alternatively include one or more embedding models configured to process the document 120 (and/or the paragraphs 122 thereof) to output text-based or document-based embeddings.
[0028] In one example, the statistical detection model 110 (indicating whether a given paragraph 122 of a document 120 includes statistical analyses) can be characterized as follows:
p(statistics present | para.) = f(char+word n-gram, statistical featurizer, embeddings)    (1)
where p represents the output 130 indicating whether a given paragraph includes statistical results/reports/analyses, f represents the classifier(s), char+word n-gram represents output of one or more language models for determining character, word, term, and/or inverse document frequency estimations, statistical featurizer represents output of one or more featurizers that determine the frequencies of statistical components (e.g., p-values, test names, test statistics, degrees of freedom, etc.), and embeddings represents embedding output of one or more embedding models.
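By way of a non-limiting illustrative sketch (not the exact implementation characterized above), the detection stage can be approximated with off-the-shelf components: character and word n-gram TF-IDF features combined with simple count-based statistical features, fed to a logistic regression classifier. The regular expressions, n-gram ranges, and classifier settings below are assumptions for illustration, and the embedding-model features are omitted for brevity.

```python
# Minimal sketch of a paragraph-level statistical detection classifier.
# The regular expressions, n-gram ranges, and classifier settings are
# illustrative assumptions, not the configuration described in the text.
import re

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline


class StatFeaturizer(BaseEstimator, TransformerMixin):
    """Counts simple statistical-reporting cues (p-values, test statistics)."""

    PATTERNS = [
        re.compile(r"p\s*[<>=]\s*0?\.\d+", re.I),               # reported p-values
        re.compile(r"\b[tFr]\s*\(\s*\d+(?:\s*,\s*\d+)?\s*\)"),  # t(df), F(df1, df2), r(df)
        re.compile(r"χ2?|\bchi[- ]?squared?\b", re.I),          # chi-square mentions
    ]

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.array([[len(p.findall(text)) for p in self.PATTERNS] for text in X])


detector = Pipeline([
    ("features", FeatureUnion([
        ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3))),
        ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("stat_counts", StatFeaturizer()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

# paragraphs: list of str; labels: 1 if a paragraph reports the paper's own statistics.
# detector.fit(paragraphs, labels)
# p_statistics_present = detector.predict_proba(paragraphs)[:, 1]
```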
[0029] Figure 2 illustrates a conceptual diagram 200 showing operational aspects of a statistical extraction model 210, which can process paragraphs 220 (or any input text) identified via the statistical detection model 110 to extract statistical information therefrom for reconstructing statistical tests. For example, the statistical extraction model 210 can extract statistical information such as the test name (TN), sample size (SS), test statistics (TS), probability value or p-value (PV), and/or others.
[0030] In the example shown in Figure 2, the statistical extraction model 210 includes three layers: an embedding layer 230, a recurrent layer 240, and a structured prediction layer 250. Figure 2 also depicts a tokenizer 222 that can be configured to tokenize the paragraphs 220 to provide tokens 224 for downstream processing (e.g., by the embedding layer 230). For example, statistical results/reporting in documents often includes specialized or unique characters, such as χ² (i.e., the chi-square test), ρ (correlation), and other mathematical symbols. Conventional natural language processing (NLP) models can thus fail to handle such out-of-vocabulary tokens in context. Accordingly, the tokenizer 222 can comprise a custom statistical tokenizer tailored for parsing statistical reporting (one will appreciate that different tokenizers can be used in different subject matter domains to accommodate different symbol/character frameworks). For instance, the tokenizer 222 can comprise a rule-based tokenizer that handles a diverse set of punctuations and symbols encountered in research contexts. As an illustrative example, a conventional tokenizer might encounter the term “p-val<0.05” and treat the term as a single token. However, the tokenizer 222 can treat such a term as three separate tokens, which can help statistical extraction significantly. Further, in some implementations, the tokenizer 222 can create token boundaries for all operators and mathematical symbols.
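As an illustrative sketch of such rule-based tokenization (the symbol set and rules below are assumptions, not the tokenizer 222 itself), a single regular expression can force token boundaries around comparison operators and mathematical symbols so that “p-val<0.05” yields three tokens:

```python
# Sketch of a rule-based "statistical" tokenizer: splits on (and keeps)
# comparison operators and common mathematical symbols. The symbol set is
# an illustrative assumption.
import re

TOKEN_RE = re.compile(
    r"\d+\.\d+"          # decimal numbers, e.g. 0.05
    r"|\d+"              # integers
    r"|[A-Za-z][\w-]*"   # words and identifiers such as p-val, t-test
    r"|[<>=≤≥±×÷]=?"     # comparison/math operators kept as their own tokens
    r"|[()\[\],;:%]"     # punctuation
    r"|\S"               # any other single non-space symbol (e.g. χ, ρ)
)


def stat_tokenize(text: str) -> list[str]:
    return TOKEN_RE.findall(text)


print(stat_tokenize("p-val<0.05"))             # ['p-val', '<', '0.05']
print(stat_tokenize("t(58) = 2.45, p = 0.018"))
# ['t', '(', '58', ')', '=', '2.45', ',', 'p', '=', '0.018']
```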
[0031] The embedding layer 230 can process the tokens 224 (i.e., tokenized text) to generate embeddings 232. The embedding layer 230 can take on various forms, such as a stacked embedding layer that combines contextual word embeddings and contextual string embeddings. In one example, the embedding layer 230 combines pre-trained BERT (Bidirectional Encoder Representations from Transformers) embeddings and contextual string embeddings (CSE) to learn syntactic and semantic features. In one example implementation, the embeddings 232 can comprise a 1792-dimensional vector (e.g., a concatenation of BERT and CSE) as follows:
$$w_i = w_i^{\mathrm{BERT}} \oplus w_i^{\mathrm{CSE}} \tag{2}$$

where $w_i$ is the embedding for token $i$ and $\oplus$ denotes concatenation.
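For instance, a stacked embedding of this kind could be assembled with the open-source flair library (not named in this disclosure) roughly as follows; the pre-trained model names, and therefore the resulting dimensionality, are assumptions for illustration.

```python
# Sketch of a stacked embedding layer combining transformer (BERT) word embeddings
# with contextual string embeddings, roughly in the spirit of Equation (2).
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings, FlairEmbeddings, StackedEmbeddings

embedding_layer = StackedEmbeddings([
    TransformerWordEmbeddings("bert-base-uncased"),   # contextual word embeddings
    FlairEmbeddings("news-forward"),                  # contextual string embeddings (forward)
    FlairEmbeddings("news-backward"),                 # contextual string embeddings (backward)
])

sentence = Sentence("t ( 58 ) = 2.45 , p = 0.018")
embedding_layer.embed(sentence)
for token in sentence:
    # Each token carries the concatenation of all stacked embeddings.
    print(token.text, token.embedding.shape)
```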
[0032] The recurrent layer 240 can be configured to capture forward and backward context for tokens, thereby providing contextual features 242. For instance, the recurrent layer 240 can be implemented as one or more Bi-LSTM (Bidirectional Long Short-Term Memory) layers, other RNN (Recurrent Neural Network) variants, transformer models, attention layers/models, and/or others. In one example implementation, the recurrent layer 240 can comprise two stacked LSTM units, each with a hidden size of 128. The embeddings 232 generated via the embedding layer 230 can be passed into the recurrent layer 240 (e.g., a Bi-LSTM layer) to learn features and capture contextual information from both forward and backward directions. Example mathematics for a recurrent layer 240 implemented as a Bi-LSTM layer are provided as follows:
Forward LSTM:

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i \overrightarrow{h}_{t-1} + b_i), \qquad
f_t = \sigma(W_f x_t + U_f \overrightarrow{h}_{t-1} + b_f), \qquad
o_t = \sigma(W_o x_t + U_o \overrightarrow{h}_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c \overrightarrow{h}_{t-1} + b_c), \qquad
\overrightarrow{c}_t = f_t \odot \overrightarrow{c}_{t-1} + i_t \odot \tilde{c}_t, \qquad
\overrightarrow{h}_t = o_t \odot \tanh(\overrightarrow{c}_t)
\end{aligned}
$$

Backward LSTM:

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i \overleftarrow{h}_{t+1} + b_i), \qquad
f_t = \sigma(W_f x_t + U_f \overleftarrow{h}_{t+1} + b_f), \qquad
o_t = \sigma(W_o x_t + U_o \overleftarrow{h}_{t+1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c \overleftarrow{h}_{t+1} + b_c), \qquad
\overleftarrow{c}_t = f_t \odot \overleftarrow{c}_{t+1} + i_t \odot \tilde{c}_t, \qquad
\overleftarrow{h}_t = o_t \odot \tanh(\overleftarrow{c}_t)
\end{aligned}
$$
where $x_t$ represents the input at time step $t$, $h_t$ is the hidden state, $c_t$ is the cell state, and $i_t$, $f_t$, and $o_t$ are the input, forget, and output gates, respectively. The weights $W$, $U$ and biases $b$ are learnable parameters, $\sigma$ denotes the sigmoid activation function, and $\odot$ represents element-wise multiplication. The forward LSTM processes the input sequence from the beginning, while the backward LSTM processes it in reverse order.
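A minimal sketch of such a recurrent layer in PyTorch, assuming 1792-dimensional input embeddings and a hidden size of 128 as in the example implementation above, is:

```python
import torch
import torch.nn as nn

# Bidirectional LSTM over a sequence of token embeddings, analogous to recurrent layer 240.
# Dimensions are illustrative: 1792-d inputs, hidden size 128, two stacked layers.
bilstm = nn.LSTM(input_size=1792, hidden_size=128, num_layers=2,
                 batch_first=True, bidirectional=True)

embeddings = torch.randn(1, 20, 1792)        # (batch, sequence length, embedding dim)
contextual_features, _ = bilstm(embeddings)  # forward and backward states, concatenated
print(contextual_features.shape)             # torch.Size([1, 20, 256])
```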
[0033] The structured prediction layer 250 can be configured to learn the label dependencies for the contextual features 242 across both directions. For instance, the structured prediction layer 250 can be implemented as a conditional random field (CRF) layer that models complex label dependencies and captures contextual information (though other prediction models may be used, such as transformer models, hidden Markov models, attention layers/models, multi-layer perceptron layers, etc.). A CRF decoding layer (as the structured prediction layer 250) can be implemented in conjunction with the Bi-LSTM layer (as the recurrent layer 240) to optimize the label sequence (e.g., label sequence 252) for a given input sequence (e.g., tokens 224). The CRF can model the conditional probability distribution of a label sequence y (e.g., label sequence 252), given the input sequence x (e.g., tokens 224), as a function of weights of features (e.g., contextual features 242):
$$p(y \mid x) = \frac{1}{Z(x)} \exp\!\left( \sum_{t} \sum_{k} w_k\, f_k(y_{t-1}, y_t, x, t) \right) \tag{3}$$
where $w_k$ are the learned weights of the $k$ features (e.g., contextual features 242) of a token (e.g., tokens 224). The conditional probability distribution can be normalized using the partition function $Z(x)$. Using the CRF layer, the model can learn to classify each token (e.g., tokens 224) dependent on the label prediction scores of its surrounding tokens, thereby obtaining the label sequence 252 and achieving name-entity recognition (NER) for statistical terms in the paragraphs 220. The recognized entities can provide a basis for extracting statistical information from the paragraphs 220 that can be used to reconstruct statistical tests to perform statistical consistency validation. For instance, Figure 2 conceptually depicts an example annotated paragraph 260, where certain tokens of the annotated paragraph 260 are labeled based on output of the structured prediction layer 250 (e.g., implemented as a CRF layer) to indicate the entities or statistical information represented by the labeled tokens. For clarity, tokens of the annotated paragraph 260 that fall outside of any entity associated with statistical information (as determined via the structured prediction layer 250) are not labeled in Figure 2. By way of illustrative example, in the annotated paragraph 260 of Figure 2, the tokens forming "t-test" are labeled with TN (indicating test name), the tokens forming "59" are labeled with SS (indicating sample size), the tokens forming "t(58) = 2.45" are labeled with TS (indicating test statistic), and the tokens forming "p = 0.018" are labeled with PV (indicating probability value or p-value).
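For illustration, the step from a predicted label sequence to extracted statistical entities could look like the following sketch; the tokens and labels are hypothetical examples in the IOB scheme discussed in the experimental section.

```python
# Turn a predicted label sequence (e.g., from the CRF decoding layer) into entities.
tokens = ["t-test", "with", "59", "participants", ",", "t", "(", "58", ")",
          "=", "2.45", ",", "p", "=", "0.018"]
labels = ["B-TN", "O", "B-SS", "O", "O", "B-TS", "I-TS", "I-TS", "I-TS",
          "I-TS", "I-TS", "O", "B-PV", "I-PV", "I-PV"]

def extract_entities(tokens, labels):
    entities, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                entities.append(current)
            current = [lab[2:], [tok]]            # start a new entity of type lab[2:]
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            current[1].append(tok)                # continue the current entity
        else:
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(toks)) for etype, toks in entities]

print(extract_entities(tokens, labels))
# [('TN', 't-test'), ('SS', '59'), ('TS', 't ( 58 ) = 2.45'), ('PV', 'p = 0.018')]
```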
[0034] Although the present example(s) focus, in at least some respects, on the statistical information being extracted using a label sequence 252, the statistical information may alternatively be extracted without determining a label sequence 252 and without using the label sequence 252 to inform the extraction of the statistical information from the set of input text. For instance, labels indicating the statistical information may be directly inferred/predicted using one or more artificial intelligence (AI) models by processing the input text, the tokens, the set of embeddings generated based on the input text, and/or the set of contextual features generated based on the set of embeddings.
[0035] As noted above, statistical consistency validation may be performed using the statistical information extracted via the statistical extraction model 210. Figure 3 illustrates a conceptual diagram 300 illustrating statistical consistency validation, where statistical information 320 obtained via name-entity recognition (e.g., via statistical extraction model 210) is utilized by a statistical validation module 310. The statistical validation module 310 can comprise a model-free component that operates based on statistical reporting principles or predefined rules. For instance, given the statistical information 320, the statistical validation module 310 can reverse engineer statistical tests (e.g., to determine the p-value) to provide one or more consistency labels 330 indicating whether a statistical report detected in a document (e.g., document 120) complies or is consistent with one or more standards, thresholds, or applicable statistical reporting principles. Example consistency labels can include "consistent", "inconsistent", "grossly inconsistent", variations thereon, and/or others. By way of illustration, the consistency of a statistical report detected in a document can be computed as follows:

$$
\text{label} =
\begin{cases}
\text{Consistent}, & \delta_p < k \\
\text{Inconsistent}, & \delta_p > k \\
\text{Grossly inconsistent}, & \delta_p > k \ \wedge\ p_c > \alpha \ \wedge\ p_r < \alpha
\end{cases}
\tag{4}
$$

where $p_r$ and $p_c$ are the reported p-value and the computed (i.e., reverse engineered) p-value, respectively, and $\delta_p = |p_r - p_c|$. The threshold $k$ is the value over which a discrepancy is considered inconsistent, and $\alpha$ is the significance threshold for the NHST. Example values can comprise $k = 0.001$ and $\alpha = 0.05$.
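As a concrete, simplified illustration for the case of a t-test, the reverse engineering and labeling of Equation (4) could be sketched as follows using SciPy. The two-sided recomputation and the single gross-inconsistency condition shown are assumptions of this sketch rather than the full rule set of the statistical validation module 310.

```python
from scipy import stats

# Recompute the p-value from the extracted test statistic and degrees of freedom
# (two-sided t-test), then compare it with the reported p-value per Equation (4).
def consistency_label(reported_p, test_stat, df, k=0.001, alpha=0.05):
    computed_p = 2 * stats.t.sf(abs(test_stat), df)   # reverse-engineered two-sided p-value
    delta = abs(reported_p - computed_p)
    if delta < k:
        return "consistent", computed_p
    if computed_p > alpha and reported_p < alpha:     # reported significant, recomputed not
        return "grossly inconsistent", computed_p
    return "inconsistent", computed_p

# Example from the annotated paragraph 260: t(58) = 2.45 reported with p = 0.018.
print(consistency_label(reported_p=0.018, test_stat=2.45, df=58))
```

Analogous recomputations apply to other test families (e.g., F, chi-square, z) when the test name, test statistic, and degrees of freedom are all extracted.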
Experimental Results
[0036] To provide sufficient validation of the techniques, methods, and principles described herein, experimental results were obtained. It shall be noted that these experimental results and the experiment(s) that yielded the results are provided by way of illustration and were performed under specific conditions using a specific embodiment or embodiments; accordingly, neither these experiments nor their results shall be used to limit the scope of the present disclosure.
[0037] In the experiments detailed herein, the statistical detection model 110 utilized a grid search to cross-validate the best values among 1-2, 1-3, and 2-3 for the character and word n-grams. The STATCHECK R package was used as a featurizer, and SPECTER (SPECTER: Document-level Representation Learning using Citation-informed Transformers; Cohan, A.; Feldman, S.; Beltagy, I.; Downey, D.; and Weld, D. S. 2020. In ACL.) and MiniLM (MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers; Wang, W.; Wei, F.; Dong, L.; Bao, H.; Yang, N.; and Zhou, M. 2020. Advances in Neural Information Processing Systems, 33: 5776-5788.) were used as embedding models. For the LR models, the optimal inverse of regularization strength was found among 0.01, 0.1, 0.3, 0.5, 1, 10, and 100. For the MLP, one hidden layer was cross-validated with 50, 100, and 150 neurons. For the GBC model, 50, 100, and 500 estimators were cross-validated.
[0038] The dataset for statistical detection (e.g., for training, evaluation, validation, etc.) was obtained by randomly sampling 1,500 paragraphs from the PubMed Open Access Subset (PMOAS) dataset (National Library of Medicine 2003), which included deposited full-text information, containing 4.75 million documents (June 2023). The 1,500 paragraphs were randomly sampled from PMOAS documents using XPath queries and filtering (e.g., sampling of paragraphs inside of tables, captions, reviews, and notes was avoided). An annotator labeled whether a paragraph contained statistics corresponding to "at least a p-value and group comparisons, and optionally test name, degrees of freedom, test statistics, among others" or not. The agreement was validated on 498 paragraphs with a second labeler and achieved excellent agreement (Cohen's κ = 0.81). The total number of labeled paragraphs was 1,970, with 413 labeled as having statistics (21%).
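By way of illustration only, XPath-based filtering of full-text XML paragraphs could resemble the following sketch; the element names assume a JATS-like schema and are not the exact queries used in the experiments.

```python
from lxml import etree

# Select body paragraphs while excluding paragraphs nested inside tables,
# captions, and notes, mirroring the sampling filters described above.
xml = b"""<article>
  <body>
    <p>A t-test showed a difference, t(58) = 2.45, p = 0.018.</p>
    <table-wrap><caption><p>Table 1 caption text.</p></caption></table-wrap>
  </body>
</article>"""

tree = etree.fromstring(xml)
paragraphs = tree.xpath(
    "//p[not(ancestor::table-wrap) and not(ancestor::caption) and not(ancestor::notes)]"
)
print([p.text for p in paragraphs])   # only the body paragraph survives the filter
```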
[0039] In the experiments detailed herein, in the training phase for the statistical extraction model 210, labeled paragraphs were split into batches of size 32. Training occurred for 25 epochs on an NVIDIA A100 80G GPU, and a separate validation set was used for early stopping (patience = 3). The learning rate for the SGD (stochastic gradient descent) optimizer was set to 0.01 with an anneal factor of 0.5. A dropout of 0.3 was applied to prevent overfitting on the training data. The hidden size of the Bi-LSTM was set to 128. After forward propagation, the Viterbi loss was calculated for each batch, which is the negative log-likelihood of the most likely label sequence, and the average for each batch was minimized, as follows:
$$\mathcal{L} = -\frac{1}{B} \sum_{i=1}^{B} \log p\!\left(\hat{y}^{(i)} \mid x^{(i)}\right) \tag{5}$$

where $B$ is the batch size, $x^{(i)}$ is the $i$-th input sequence in the batch, and $\hat{y}^{(i)}$ is its most likely label sequence under the model.
[0040] The dataset for statistical extraction (e.g., for training, evaluation, validation, etc.) was obtained by annotating 7,135 paragraphs from PMOAS and from the SCORE project (Systematizing confidence in open research and evidence (SCORE); Alipourfard, N.; Arendt, B.; Benjamin, D. J.; Benkler, N.; Bishop, M. M.; Burstein, M.; Bush, M.; Caverlee, J.; Chen, Y.; and Clark, C. 2021. Technical Report, Center for Open Science.) The dataset comprised a combination of diverse variations encountered in statistical reporting. Next, each sentence was converted into an explicit sequence labeling format, the IOB format (Text chunking using transformation-based learning; Ramshaw, L. A.; and Marcus, M. P. 1999. Natural language processing using very large corpora, 157-176.) Every token was labeled from the list of B-XXX, I-XXX, and O. Here, B-XXX represents the beginning token of an entity of type XXX, and I-XXX indicates that the token is inside a multi-token entity span. O was used to tag tokens lying outside of all named entities. In total, 594 Test Name, 459 Sample Size, 3,467 Test Statistic, and 5,287 p-value labels (spans) were obtained.
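For illustration, converting token-level entity spans into IOB labels could be sketched as follows; the span format shown (start token, end token exclusive, entity type) is an assumption about the annotation encoding.

```python
# Convert annotated token spans into IOB (B-XXX / I-XXX / O) labels.
def to_iob(tokens, spans):
    """spans: list of (start_token, end_token_exclusive, entity_type)."""
    labels = ["O"] * len(tokens)
    for start, end, etype in spans:
        labels[start] = f"B-{etype}"
        for i in range(start + 1, end):
            labels[i] = f"I-{etype}"
    return labels

tokens = ["p", "=", "0.018"]
print(to_iob(tokens, [(0, 3, "PV")]))   # ['B-PV', 'I-PV', 'I-PV']
```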
[0041] The performance of the statistical detection model 110 was assessed using a nested cross-validation procedure where the outer validation averaged the test F1 scores across five folds. The inner cross-validation served to select the optimal parameters for each model class using a three-fold, grouped, stratified cross-validation that avoided sharing annotations from different annotators for the same paragraph across folds. The highest performing model was the Gradient Boosting Classifier with char and word n-gram frequencies, achieving an F1 score of 0.91, precision of 0.89, and recall of 0.93, as indicated in Figure 4, which illustrates a table depicting model performance for different statistical detection model configurations (± standard errors of the mean).

[0042] The stability of the statistical extraction model 210 was assessed using a stratified five-fold cross-validation process. The different folds maintained the class distribution, and the performances were aggregated across the folds. The STATSNERD model of the present disclosure exhibited the highest F1 score of 0.95, as indicated in Figure 5, which illustrates a table depicting model performance of different statistical extraction models (± standard errors of the mean).
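A simplified sketch of such a nested, grouped, stratified cross-validation using scikit-learn (version 1.0 or later for StratifiedGroupKFold) might look as follows; the toy data, estimator, and parameter grid are placeholders, and the inner loop is shown without grouping for brevity.

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold, GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy data: each paragraph is its own group so annotations of the same paragraph
# never appear in both the training and test folds of the outer loop.
texts = np.array(["t(58) = 2.45, p = 0.018"] * 10 + ["No statistics here."] * 10)
y = np.array([1] * 10 + [0] * 10)
groups = np.arange(len(texts))

inner = GridSearchCV(
    make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
                  LogisticRegression(max_iter=1000)),
    {"logisticregression__C": [0.1, 1, 10]},   # inner loop selects hyperparameters
    cv=3,
)
outer = StratifiedGroupKFold(n_splits=5)       # outer loop estimates test F1
scores = cross_val_score(inner, texts, y, groups=groups, cv=outer, scoring="f1")
print(scores.mean())
```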
[0043] 11.9 million statistical tests were detected across the 4.47 million PMOAS full-text documents. The possibility of verifying the p-values reported in these statistical tests (i.e., the ability to reverse engineer the p-value) was assessed. It was determined that a very high percentage of the statistical tests did not have sufficient information to verify the p-value.
[0044] Results were aggregated at the article level in two ways. First, the average number of incomplete statistics within an article was computed. Second, the statistical inconsistencies present in an article were analyzed (per Equation 4). It was determined that 65% of the documents contained at least one incomplete statistic, 64.6% had all statistics consistent (35.3% contained at least one inconsistent statistic), and 0.88% contained a grossly inconsistent statistic.
[0045] Disclosed embodiments are directed to techniques for detecting statistical errors in scientific documents, papers, publications, etc. The disclosed techniques rely at least in part on a named-entity recognition approach to detect, extract, and assess whether statistical tests are consistent. The disclosed data-driven model can allow for capturing significantly more variability in how scientists report their results, including other kinds of statistical tests and variations beyond the reporting format suggested by the APA.
[0046] The disclosed techniques can implement custom tokenizers. It was found that standard transformer-based architectures for science did not perform well on either the statistical detection or the statistical extraction task. For instance, for the statistical detection task (Figure 4), SPECTER had a low F1 = 0.66 performance compared to F1 = 0.91 for the GBC model using char and word n-grams. Similarly, for the statistical extraction task, BERT alone had F1 = 0.47 vs. STATSNERD with F1 = 0.95. The improved performance may be attributable to the custom features based on statistical reporting practices and to modifying standard tokenizers using similar concepts.

[0047] The disclosed STATSNERD framework (including statistical detection, statistical extraction, and statistical validation components) can be implemented to identify statistical inconsistencies in scientific documents and can enhance scientific rigor by ensuring statistical accuracy in research and/or other publications, papers, documents, etc.

Example Method(s)
[0048] The following discussion now refers to a number of methods and method acts that may be performed. Although the method acts may be discussed in a certain order or illustrated in a flow chart as occurring in a particular order, no particular ordering is required unless specifically stated, or required because an act is dependent on another act being completed prior to the act being performed.
[0049] Figure 6 illustrates an example flow diagram 600 depicting acts associated with the disclosed subject matter. The operations depicted in flow diagram 600 may be performed using one or more components of a system 700 described hereinafter, such as processor(s) 702, storage 704, sensor(s) 706, I/O system(s) 708, communication system(s) 710, etc.
[0050] Act 602 of flow diagram 600 includes accessing a set of input text. In some instances, the set of input text (e.g., paragraph(s) 220) is determined by processing an input document (e.g., document 120) using a statistical detection model (e.g., statistical detection model 110) trained to identify sets of text that include statistics (e.g., output 130). In some implementations, the statistical detection model comprises a classifier model configured to process feature output of one or more initial models. In some embodiments, the classifier model comprises a logistic regression model, a multi-layer perceptron, or a gradient boosting classifier. In some examples, the one or more initial models include one or more of: one or more language models, one or more featurizers, or one or more embedding models. [0051] Act 604 of flow diagram 600 includes generating a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model (e.g., statistical extraction model 210). In some instances, the embedding layer (e.g., embedding layer 230) generates the set of embeddings (e.g., embeddings 232) by processing a set of tokens (e.g., tokens 224) generated using the set of input text. In some implementations, the set of tokens is generated using a tokenizer (e.g., tokenizer 222) that creates token boundaries for mathematical symbols and operators. In some embodiments, the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings.
[0052] Act 606 of flow diagram 600 includes generating a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model. In some examples, the recurrent layer (e.g., recurrent layer 240) comprises a bidirectional long short-term memory layer.
[0053] Act 608 of flow diagram 600 includes generating a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result. In some instances, the structured prediction layer (e.g., structured prediction layer 250) comprises a conditional random field layer. In some implementations, the statistical information (e.g., statistical information 320) indicated by the label sequence (e.g., label sequence 252) comprises one or more of: test name, sample size, test statistics, or probability value. In some embodiments, the reported statistical result comprises a reported p-value.
[0054] Act 610 of flow diagram 600 includes reconstructing a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result. In some examples, the computed statistical result comprises a computed p-value. [0055] Act 612 of flow diagram 600 includes generating a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result. In some instances, the consistency label (e.g., consistency label(s) 330) comprises one of: consistent, inconsistent, or grossly inconsistent.
[0056] Embodiments disclosed herein can include those in the following numbered clauses: [0057] Clause 1. A system for validating statistical information using name-entity recognition, the system comprising: one or more processors; and one or more computer-readable recording media that store instructions that are executable by the one or more processors to configure the system to: access a set of input text; generate a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
[0058] Clause 2. The system of clause 1, wherein the set of input text is determined by processing an input document using a statistical detection model trained to identify sets of text that include statistics.
[0059] Clause 3. The system of clause 2, wherein the statistical detection model comprises a classifier model configured to process feature output of one or more initial models.
[0060] Clause 4. The system of clause 3, wherein the classifier model comprises a logistic regression model, a multi-layer perceptron, or a gradient boosting classifier. [0061] Clause 5. The system of clause 3, wherein the one or more initial models include one or more of: one or more language models, one or more featurizers, or one or more embedding models.
[0062] Clause 6. The system of clause 1, wherein the embedding layer generates the set of embeddings by processing a set of tokens generated using the set of input text.
[0063] Clause 7. The system of clause 6, wherein the set of tokens is generated using a tokenizer that creates token boundaries for mathematical symbols and operators.
[0064] Clause 8. The system of clause 1, wherein the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings. [0065] Clause 9. The system of clause 1, wherein the recurrent layer comprises a bidirectional long short-term memory layer.
[0066] Clause 10. The system of clause 1, wherein the structured prediction layer comprises a conditional random field layer.
[0067] Clause 11. The system of clause 1, wherein the statistical information indicated by the label sequence comprises one or more of: test name, sample size, test statistics, or probability value.
[0068] Clause 12. The system of clause 1, wherein the reported statistical result comprises a reported p-value, and wherein the computed statistical result comprises a computed p-value.
[0069] Clause 13. The system of clause 1, wherein the consistency label comprises one of: consistent, inconsistent, or grossly inconsistent.
[0070] Clause 14. A method for validating statistical information using name-entity recognition, the method comprising: accessing a set of input text; generating a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generating a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generating a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstructing a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generating a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
[0071] Clause 15. The method of clause 14, wherein the set of input text is determined by processing an input document using a statistical detection model trained to identify sets of text that include statistics. [0072] Clause 16. The method of clause 14, wherein the embedding layer generates the set of embeddings by processing a set of tokens generated using the set of input text.
[0073] Clause 17. The method of clause 14, wherein the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings. [0074] Clause 18. The method of clause 14, wherein the recurrent layer comprises a bidirectional long short-term memory layer.
[0075] Clause 19. The method of clause 14, wherein the structured prediction layer comprises a conditional random field layer.
[0076] Clause 20. One or more computer-readable recording media that store instructions that are executable by one or more processors of a system to validate statistical information using name-entity recognition by configuring the system to: access a set of input text; generate a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
Additional Details Related to Implementing the Disclosed Embodiments
[0077] Figure 7 illustrates example components of a system 700 that may comprise or implement aspects of one or more disclosed embodiments. For example, Figure 7 illustrates an implementation in which the system 700 includes processor(s) 702, storage 704, sensor(s) 706, I/O system(s) 708, and communication system(s) 710. Although Figure 7 illustrates a system 700 as including particular components, one will appreciate, in view of the present disclosure, that a system 700 may comprise any number of additional or alternative components.
[0078] The processor(s) 702 may comprise one or more sets of electronic circuitries that include any number of logic units, registers, and/or control units to facilitate the execution of computer-readable instructions (e.g., instructions that form a computer program). Such computer- readable instructions may be stored within storage 704. The storage 704 may comprise physical system memory or computer-readable recording media and may be volatile, non-volatile, or some combination thereof. Furthermore, storage 704 may comprise local storage, remote storage (e.g., accessible via communication system(s) 710 or otherwise), or some combination thereof. Additional details related to processors (e.g., processor(s) 702) and computer storage media (e.g., storage 704) will be provided hereinafter.
[0079] As will be described in more detail, the processor(s) 702 may be configured to execute instructions stored within storage 704 to perform certain actions. In some instances, the actions may rely at least in part on communication system(s) 710 for receiving data from remote system(s) 712, which may include, for example, separate systems or computing devices, sensors, and/or others. The communications system(s) 710 may comprise any combination of software or hardware components that are operable to facilitate communication between on-system components/devices and/or with off-system components/devices. For example, the communications system(s) 710 may comprise ports, buses, or other physical connection apparatuses for communicating with other devices/components. Additionally, or alternatively, the communications system(s) 710 may comprise systems/components operable to communicate wirelessly with external systems and/or devices through any suitable communication channel(s), such as, by way of non-limiting example, Bluetooth, ultra-wideband, WLAN, infrared communication, and/or others.
[0080] Figure 7 illustrates that a system 700 may comprise or be in communication with sensor(s) 706. Sensor(s) 706 may comprise any device for capturing or measuring data representative of perceivable phenomenon. By way of non-limiting example, the sensor(s) 706 may comprise one or more antennae, monopoles, image sensors, microphones, thermometers, barometers, magnetometers, accelerometers, gyroscopes, and/or others.
[0081] Furthermore, Figure 7 illustrates that a system 700 may comprise or be in communication with I/O system(s) 708. I/O system(s) 708 may include any type of input or output device such as, by way of non-limiting example, a display, a touch screen, a mouse, a keyboard, a controller, and/or others, without limitation. [0082] The methods and/or operations described herein may be practiced by a computer system including one or more processors and computer-readable media such as computer memory. In particular, the computer memory may store computer-executable instructions that when executed by one or more processors cause various functions to be performed, such as the acts recited in the embodiments. [0083] Computing system functionality can be enhanced by a computing system's ability to be interconnected to other computing systems via network connections. Network connections may include, but are not limited to, connections via wired or wireless Ethernet, cellular connections, or even computer to computer connections through serial, parallel, USB, or other connections. The connections allow a computing system to access services at other computing systems and to quickly and efficiently receive application data from other computing systems. [0084] Interconnection of computing systems has facilitated distributed computing systems, such as so-called "cloud" computing systems. In this description, "cloud computing" may be systems or resources for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services, etc.) that can be provisioned and released with reduced management effort or service provider interaction. A cloud model can be composed of various characteristics (e.g., on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, etc.), service models (e.g., Software as a Service ("SaaS"), Platform as a Service ("PaaS"), Infrastructure as a Service ("IaaS")), and deployment models (e.g., private cloud, community cloud, public cloud, hybrid cloud, etc.).
[0085] Cloud and remote based service applications are prevalent. Such applications are hosted on public and private remote systems such as clouds and usually offer a set of web based services for communicating back and forth with clients.
[0086] Many computers are intended to be used by direct user interaction with the computer. As such, computers have input hardware and software user interfaces to facilitate user interaction.
For example, a modern general purpose computer may include a keyboard, mouse, touchpad, camera, etc. for allowing a user to input data into the computer. In addition, various software user interfaces may be available.
[0087] Examples of software user interfaces include graphical user interfaces, text command line based user interface, function key or hot key user interfaces, and the like.
[0088] Disclosed embodiments may comprise or utilize a special purpose or general-purpose computer including computer hardware, as discussed in greater detail below. Disclosed embodiments also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system.
Computer-readable media that store computer-executable instructions are physical storage media. [0089] Physical computer-readable storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage (such as CDs, DVDs, etc.), magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
[0090] A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry program code in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above are also included within the scope of computer-readable media.
[0091] Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission computer-readable media to physical computer-readable storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer-readable physical storage media at a computer system. Thus, computer-readable physical storage media can be included in computer system components that also (or even primarily) utilize transmission media. [0092] Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. [0093] Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
[0094] Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
[0095] The present invention may be embodied in other specific forms without departing from its spirit or characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. A system for validating statistical information using name-entity recognition, the system comprising: one or more processors; and one or more computer-readable recording media that store instructions that are executable by the one or more processors to configure the system to: access a set of input text; generate a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
2. The system of claim 1, wherein the set of input text is determined by processing an input document using a statistical detection model trained to identify sets of text that include statistics.
3. The system of claim 2, wherein the statistical detection model comprises a classifier model configured to process feature output of one or more initial models.
4. The system of claim 3, wherein the classifier model comprises a logistic regression model, a multi-layer perceptron, or a gradient boosting classifier.
5. The system of claim 3, wherein the one or more initial models include one or more of: one or more language models, one or more featurizers, or one or more embedding models.
6. The system of claim 1, wherein the embedding layer generates the set of embeddings by processing a set of tokens generated using the set of input text.
7. The system of claim 6, wherein the set of tokens is generated using a tokenizer that creates token boundaries for mathematical symbols and operators.
8. The system of claim 1, wherein the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings.
9. The system of claim 1, wherein the recurrent layer comprises a bidirectional long short-term memory layer or an attention layer.
10. The system of claim 1, wherein the structured prediction layer comprises a conditional random field layer or a multi-layer perceptron layer.
11. The system of claim 1, wherein the statistical information indicated by the label comprises one or more of: test name, sample size, test statistics, or probability value.
12. The system of claim 1, wherein the reported statistical result comprises a reported p-value, and wherein the computed statistical result comprises a computed p-value.
13. The system of claim 1, wherein the consistency label comprises one of: consistent, inconsistent, or grossly inconsistent.
14. A method for validating statistical information using name-entity recognition, the method comprising: accessing a set of input text; generating a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generating a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generating a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstructing a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generating a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
15. The method of claim 14, wherein the set of input text is determined by processing an input document using a statistical detection model trained to identify sets of text that include statistics.
16. The method of claim 14, wherein the embedding layer generates the set of embeddings by processing a set of tokens generated using the set of input text.
17. The method of claim 14, wherein the set of embeddings comprises a concatenation of contextual word embeddings and contextual string embeddings.
18. The method of claim 14, wherein the recurrent layer comprises a bidirectional long short-term memory layer.
19. The method of claim 14, wherein the structured prediction layer comprises a conditional random field layer.
20. One or more computer-readable recording media that store instructions that are executable by one or more processors of a system to validate statistical information using name-entity recognition by configuring the system to: access a set of input text; generate a set of embeddings based on the set of input text using an embedding layer of a statistical extraction model; generate a set of contextual features by processing the set of embeddings using a recurrent layer of the statistical extraction model; generate a label sequence for the set of input text using a structured prediction layer of the statistical extraction model, wherein the label sequence indicates statistical information present in the set of input text, wherein the statistical information includes a reported statistical result; reconstruct a statistical test using at least part of the statistical information indicated to be present in the set of input text by the label sequence, wherein the statistical test provides a computed statistical result for comparison with the reported statistical result; and generate a consistency label for the reported statistical result based on a comparison of the computed statistical result with the reported statistical result.
PCT/US2024/046747 2023-09-14 2024-09-13 System for extracting and quantifying statistical problems in documents Pending WO2025059561A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363538344P 2023-09-14 2023-09-14
US63/538,344 2023-09-14

Publications (1)

Publication Number Publication Date
WO2025059561A1 true WO2025059561A1 (en) 2025-03-20

Family

ID=95022133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/046747 Pending WO2025059561A1 (en) 2023-09-14 2024-09-13 System for extracting and quantifying statistical problems in documents

Country Status (1)

Country Link
WO (1) WO2025059561A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050055193A1 (en) * 2003-09-05 2005-03-10 Rosetta Inpharmatics Llc Computer systems and methods for analyzing experiment design
US20130198119A1 (en) * 2012-01-09 2013-08-01 DecisionQ Corporation Application of machine learned bayesian networks to detection of anomalies in complex systems
US20210090694A1 (en) * 2019-09-19 2021-03-25 Tempus Labs Data based cancer research and treatment systems and methods
US20210110277A1 (en) * 2019-10-15 2021-04-15 Accenture Global Solutions Limited Textual entailment
US20220115100A1 (en) * 2020-10-14 2022-04-14 nference, inc. Systems and methods for retrieving clinical information based on clinical patient data


Similar Documents

Publication Publication Date Title
Wang et al. MAVEN: A massive general domain event detection dataset
Song et al. Deep learning methods for biomedical named entity recognition: a survey and qualitative comparison
Chen et al. Automatically labeled data generation for large scale event extraction
Quan et al. Multichannel convolutional neural network for biological relation extraction
Gasmi et al. Information extraction of cybersecurity concepts: An LSTM approach
Guan Clinical relation extraction with deep learning
Kim et al. Extracting drug–drug interactions from literature using a rich feature-based linear kernel approach
Daraghmi et al. From text to insight: An integrated cnn-bilstm-gru model for arabic cyberbullying detection
US20210042344A1 (en) Generating or modifying an ontology representing relationships within input data
Qian et al. Multi-label vulnerability detection of smart contracts based on Bi-LSTM and attention mechanism
Bozuyla et al. Developing a fake news identification model with advanced deep languagetransformers for turkish covid-19 misinformation data
Narayanasamy et al. Ontology-enabled emotional sentiment analysis on COVID-19 pandemic-related twitter streams
Hernandez et al. An automated approach to identify scientific publications reporting pharmacokinetic parameters
Klein et al. Towards scaling Twitter for digital epidemiology of birth defects
Mahto et al. Emotion prediction for textual data using GloVe based HeBi-CuDNNLSTM model
Roman et al. Exploiting contextual word embedding of authorship and title of articles for discovering citation intent classification
Devkota et al. A Gated Recurrent Unit based architecture for recognizing ontology concepts from biological literature
Sakai et al. Large language models for healthcare text classification: A systematic review
Bucos et al. Enhancing fake news detection in Romanian using transformer-based back translation augmentation
Liu et al. Integration of NLP2FHIR representation with deep learning models for EHR phenotyping: a pilot study on obesity datasets
Hussain et al. ORUD-Detect: A Comprehensive Approach to Offensive Language Detection in Roman Urdu Using Hybrid Machine Learning–Deep Learning Models with Embedding Techniques
Shukla et al. Stacked classification approach using optimized hybrid deep learning model for early prediction of behaviour changes on social media
Shukla et al. RETRACTED: A comprehensive survey on sentiment analysis: Challenges and future insights
Bhuiyan et al. Understanding Mental Health Content on Social Media and Its Effect Towards Suicidal Ideation
Asgari-Bidhendi et al. PERLEX: A Bilingual Persian‐English Gold Dataset for Relation Extraction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24866448

Country of ref document: EP

Kind code of ref document: A1