HK1084845B - Managing and processing self-monitoring blood glucose - Google Patents
- Publication number
- HK1084845B, HK06104965.7A, HK06104965A
- Authority
- HK
- Hong Kong
- Prior art keywords
- hba
- data
- smbg
- readings
- rhi1
- Prior art date
Description
Related patent application
This patent application is the national-stage filing of international patent application No. PCT/US2003/025053, filed in August 2003, which claims priority from U.S. provisional patent application serial No. 60/402,976, filed August 13, 2002, and U.S. provisional patent application serial No. 60/478,377, filed June 13, 2003, each entitled "Method, System and Computer Program Product for Processing Self-Monitoring Blood Glucose (SMBG) Data to Enhance Self-Management of a Diabetic Patient", the entire disclosures of which are incorporated herein by reference.
This application is related to international application No. PCT/US01/09884, filed March 29, 2001 (publication Nos. WO 01/72208 A2 and WO 01/72208 A3), entitled "Method, System and Computer Program Product for Evaluation of Glycemic Control in Diabetes from Self-Monitoring Data", and to U.S. patent application serial No. 10/240,228, filed September 26, 2002, entitled "Method, System and Computer Program Product for Evaluation of Diabetic Glycemic Control from Self-Monitoring Data", the entire disclosures of which are incorporated herein by reference.
Technical Field
The present system relates generally to glycemic control in individuals with diabetes and, more particularly, to a computer-based system and method for predicting glycosylated hemoglobin (HbA1c and HbA1) and the risk of hypoglycemia.
Background
Numerous studies have repeatedly demonstrated that the most effective way to prevent the long-term complications of diabetes is to keep blood glucose (BG) levels tightly controlled within the normal range by intensive insulin therapy. These include the Diabetes Control and Complications Trial (see The DCCT Research Group: The Effect of Intensive Treatment of Diabetes on the Development and Progression of Long-Term Complications of Insulin-Dependent Diabetes Mellitus. New England Journal of Medicine, 329:978-986, 1993), the Stockholm Diabetes Intervention Study (see Mortality and Treatment Side Effects During Long-Term Intensified Conventional Insulin Treatment in the Stockholm Diabetes Intervention Study. Diabetes, 43:313-317, 1994), and the United Kingdom Prospective Diabetes Study (see UK Prospective Diabetes Study Group: Effect of Intensive Blood Glucose Control With Metformin on Complications in Patients With Type 2 Diabetes (UKPDS 34). Lancet, 352:837-853, 1998).
However, the same studies also documented adverse effects of intensive insulin therapy, the most serious of which is an increased risk of frequent severe hypoglycemia (SH), an episode of low blood glucose that precludes self-treatment and requires external assistance for recovery (see The DCCT Research Group: Epidemiology of Severe Hypoglycemia in the Diabetes Control and Complications Trial. American Journal of Medicine, 90:450-459, 1991). Because SH can result in accidents, coma, and even death, fear of SH discourages patients and health care providers from pursuing intensive therapy. Consequently, hypoglycemia has been identified as the major barrier to improved glycemic control (Cryer PE: Hypoglycemia Is the Limiting Factor in Diabetes Management. Diabetes Metab Res Rev, 15:42-46, 1999).
Thus, people with diabetes face a lifelong optimization problem: maintaining tight glycemic control without increasing their risk of hypoglycemia. A major challenge related to this problem is the creation of a simple and reliable method that can simultaneously assess a patient's glycemic control and risk of hypoglycemia, and that can be used in everyday settings.
It has been known for more than two decades that glycosylated hemoglobin is a marker of glycemic control in individuals with diabetes (type 1 or type 2). Numerous researchers have studied this relationship and found that glycosylated hemoglobin generally reflects a patient's average BG level over the preceding two months. Since in most patients with diabetes BG levels fluctuate considerably over time, it was suggested that the real connection between overall glycemic control and HbA1c can be observed only in patients known to be in stable glycemic control over a long period of time.
Early studies of such patients established an almost deterministic relationship between the mean BG level over the preceding 5 weeks and HbA1c; this curvilinear relationship yielded a correlation coefficient of 0.98 (see Aaby Svendsen P, Lauritzen T, Soegard U, Nerup J (1982). Glycosylated Haemoglobin and Steady-State Mean Blood Glucose Concentration in Type 1 (Insulin-Dependent) Diabetes. Diabetologia, 23, 403-405). In 1993, the DCCT concluded that HbA1c is the "gold-standard" assay of glycosylated hemoglobin, and the DCCT confirmed the previously established linear relationship between mean BG and HbA1c (see Santiago JV (1993). Lessons from the Diabetes Control and Complications Trial. Diabetes, 42, 1549-1554).
Guidelines have been proposed indicating that an HbA1c of 7% corresponds to a mean BG of 8.3 mM (150 mg/dl), an HbA1c of 9% corresponds to a mean BG of 11.7 mM (210 mg/dl), and a 1% increase in HbA1c corresponds to an increase in mean BG of 1.7 mM (30 mg/dl). Because direct measurement of mean BG is impractical, the DCCT suggested that a patient's glycemic control can be assessed with a single, simple test: HbA1c. However, studies have clearly shown that HbA1c is insensitive to hypoglycemia.
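The guideline's linear rule lends itself to a one-line conversion. The sketch below is purely illustrative (the function name is ours, not part of the specification) and simply encodes the stated anchors: 7% maps to 150 mg/dl, with about 30 mg/dl added per additional 1% of HbA1c.

```python
def mean_bg_from_hba1c(hba1c_pct):
    """Approximate mean BG (mg/dl) implied by the guideline:
    HbA1c of 7% -> 150 mg/dl; each +1% HbA1c -> +30 mg/dl mean BG."""
    return 150.0 + 30.0 * (hba1c_pct - 7.0)
```

For example, an HbA1c of 9% yields 210 mg/dl (11.7 mM), matching the guideline figures above.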
Indeed, no reliable predictor of a patient's immediate risk of SH has been derived from any available data. The DCCT concluded that only about 8% of future SH episodes can be predicted from known parameters such as SH history, low HbA1c, and hypoglycemia unawareness. A recent review details the current clinical status of this problem and the options available to patients and health care providers for preventing SH (see Bolli GB: How To Ameliorate the Problem of Hypoglycemia in Intensive As Well As Nonintensive Treatment of Type 1 Diabetes. Diabetes Care, 22, Supplement 2: B43-B52, 1999).
Modern home BG monitors provide the means for frequent BG measurement through self-monitoring of BG (SMBG). However, the problem with SMBG is that the data collected by BG monitors have yet to be linked to HbA1c and hypoglycemia. In other words, there is currently no reliable method for estimating HbA1c from SMBG readings or for identifying impending hypoglycemia (see Bremer T and Gough DA: Is Blood Glucose Predictable From Previous Values?).
It is therefore an object of the present invention to bridge this gap by proposing three distinct but mutually compatible algorithms for estimating HbA1c and the risk of hypoglycemia from SMBG data, thereby predicting the long-term and short-term risk of hypoglycemia and the long-term risk of hyperglycemia.
The inventors have previously reported that routinely collected SMBG data have not been used for estimating HbA1c and the risk of hypoglycemia because the sophisticated methods of data collection and clinical assessment used in diabetes research are rarely supported by diabetes-specific and mathematically rigorous statistical procedures.
In response to the need for statistical analyses that take into account the specific distribution of BG values, the inventors proposed a symmetrizing transformation of the blood glucose measurement scale (see Kovatchev BP, Cox DJ, Gonder-Frederick LA and Clarke WL (1997). Symmetrization of the Blood Glucose Measurement Scale and Its Applications. Diabetes Care, 20, 1655-1658), which works as follows. BG levels are measured in mg/dl in the United States and in mmol/L (or mM) in most other countries. The two scales are directly related by 1 mM = 18 mg/dl. The entire BG range reported in most references is 1.1-33.3 mM, which is considered to cover practically all observations. According to the recommendations of the DCCT (see The DCCT Research Group (1993), cited above), the target BG range for a person with diabetes, also known as the euglycemic range, is 3.9-10 mM; hypoglycemia occurs when BG falls below 3.9 mM, and hyperglycemia occurs when BG rises above 10 mM. Unfortunately, this scale is not numerically symmetric: the hyperglycemic range (10-33.3 mM) is wider than the hypoglycemic range (1.1-3.9 mM), and the euglycemic range (3.9-10 mM) is not centered within the scale. The inventors corrected this asymmetry by introducing a transformation f(BG), a continuous function defined on the BG range [1.1, 33.3] with the two-parameter analytical form:
f(BG, α, β) = [(ln(BG))^α − β], α, β > 0,
which satisfies the assumptions:
a1 where f (33.3, α, β) ═ f (1.1, α, β) and
A2:f(10.0,α,β)=-f(3.9,α,β)。
then, f () is multiplied by a third scaling parameter to fix the minimum and maximum values of the transformed BG range to each otherAnd. These values are convenient because an arbitrary variable with a standard normal distribution is in the intervalWith a value of 99.8% in it. If BG is measured in mmol/L, the function f (BG, α, β) has a parameter α of 1.026, β of 1.861, and a proportionality parameter γ of 1.794, numerically solved according to assumptions a1 and a 2. If BG is measured in mg/dl, the calculated parameters are α -1.084, β -5.381, and γ -1.509.
Thus, when BG is measured in mmol/L, the symmetrizing transformation is f(BG) = 1.794[(ln(BG))^1.026 − 1.861]; when BG is measured in mg/dl, it is f(BG) = 1.509[(ln(BG))^1.084 − 5.381].
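The two parameterizations above can be sketched in a few lines of Python. This is an illustrative implementation (the function and dictionary names are ours); it encodes the published parameters exactly as given in the text.

```python
import math

# Symmetrizing transformation of the BG scale, per-unit parameters
# (alpha, beta, gamma) solved from assumptions A1 and A2 as given above.
PARAMS = {
    "mmol/L": (1.026, 1.861, 1.794),
    "mg/dl":  (1.084, 5.381, 1.509),
}

def f(bg, unit="mg/dl"):
    """Symmetrized BG value: negative for low BG, positive for high BG."""
    alpha, beta, gamma = PARAMS[unit]
    return gamma * (math.log(bg) ** alpha - beta)
```

A quick check of the construction: f is negative in the hypoglycemic range, positive in the hyperglycemic range, and crosses zero near 112.5 mg/dl, the threshold used later in the preprocessing formulas.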
On the basis of the symmetrizing transformation f(·), the inventors introduced the Low BG Index, a new measure for estimating the risk of hypoglycemia from SMBG readings (see Cox DJ, Kovatchev BP, Julian DM, Gonder-Frederick LA, Polonsky WH, Schlundt DG, Clarke WL: Frequency of Severe Hypoglycemia in IDDM Can Be Predicted From Self-Monitoring Blood Glucose Data. Journal of Clinical Endocrinology and Metabolism, 79:1659-1662, 1994). Given a series of SMBG readings, the risk associated with each reading is computed as 10·f(BG)² when f(BG) < 0, and 0 otherwise; the Low BG Index is the average of these risk values. A High BG Index, computed symmetrically to the Low BG Index, has also been proposed, but has not found practical application.
Using the Low BG Index in a regression model, the inventors were able to account for 40% of the variance of SH episodes over the following 6 months from SH history and SMBG data, and later improved this prediction to 46% (see Kovatchev BP, Straume M, Farhi LS, Cox DJ: Estimating the Speed of Blood Glucose Transitions and Its Relationship to Severe Hypoglycemia. Diabetes, 48: Supplement 1, A363, 1999).
Furthermore, the inventors have reported certain relationships between HbA1c and SMBG (see Kovatchev BP, Cox DJ, Straume M, Farhi LS. Association of Self-Monitoring Blood Glucose Profiles With Glycosylated Hemoglobin. In: Methods in Enzymology, vol. 321: Numerical Computer Methods, Part C, Michael Johnson and Ludvig Brand, eds., Academic Press, NY; 2000).
These findings form part of the theoretical background of the present invention. To bring this theory into practice, several key theoretical components are added, as explained in the sections that follow. In particular, three methods are proposed for estimating HbA1c and the long-term and short-term risk of hypoglycemia. These methods were developed based on, but are not limited to, a detailed analysis of more than 300,000 SMBG readings, records of severe hypoglycemia, and HbA1c results from 867 individuals with diabetes.
The inventors therefore sought to overcome the aforementioned limitations of conventional approaches by providing a simple and reliable method that can simultaneously assess a patient's glycemic control and risk of hypoglycemia, and that can be used in the patient's everyday environment.
Disclosure of Invention
The present invention comprises a data analysis method and computer-based system for simultaneously estimating, from routinely collected SMBG data, the two most important components of glycemic control in diabetes: HbA1c and the risk of hypoglycemia. For the purposes of this document, self-monitoring of BG (SMBG) is defined as any method of determining blood glucose under the natural living conditions of a person with diabetes, and includes the methods currently used by SMBG devices, which typically store 200 or more readings. Given this broad definition of SMBG, the present invention improves (but is not limited to improving) the performance of existing home blood glucose monitoring devices by introducing an intelligent data interpretation component capable of predicting both HbA1c and periods of high risk of hypoglycemia; the same component can also enhance the performance of future continuous monitoring devices.
One aspect of the present invention includes a method, system, and computer program product for estimating HbA1c from SMBG data collected over a predetermined period, e.g., about 4-6 weeks. In one embodiment, the present invention provides a computerized method and system for estimating a patient's HbA1c from BG data collected over a predetermined period. The method (or system, or computer-usable medium) includes estimating the patient's HbA1c based on BG data collected over a first predetermined duration. The method includes preparing the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; estimating HbA1c using at least one of four predetermined formulas; and verifying the validity of the estimate via sample selection criteria.
Another aspect of the invention includes a method, system, and computer program product for estimating the long-term probability of Severe Hypoglycemia (SH). The method uses SMBG readings over a predetermined period, e.g., 4-6 weeks, and predicts SH risk over the next approximately 6 months. In one embodiment, the present invention provides a computerized method and system for estimating the long-term probability of Severe Hypoglycemia (SH) from BG data collected over a predetermined period of time. The present method (or system or computer usable medium) includes estimating a long term probability of Severe Hypoglycemia (SH) or Moderate Hypoglycemia (MH) in a patient based on BG data collected over a predetermined duration. The method comprises the following steps: calculating an LBGI from the collected BG data; and estimating the number of future SH events using a predetermined mathematical formula based on the calculated LBGI.
Still another aspect of the present invention includes a method, system, and computer program product for identifying periods of high risk of hypoglycemia within the next 24 hours (or another selected period). This is achieved by computing the short-term risk of hypoglycemia from SMBG readings collected during the preceding 24 hours. In one embodiment, the present invention provides a computerized method and system for estimating the short-term risk of severe hypoglycemia (SH) from BG data collected over a predetermined period. The method (or system, or computer-usable medium) includes estimating the short-term probability of SH in a patient based on BG data collected over a predetermined duration. The method includes: calculating a scale value from the collected BG data; and calculating a low BG risk value (RLO) for each BG reading.
One aspect of an embodiment of the invention includes a method (or, alternatively, a computer program) of estimating a patient's HbA1c based on BG data collected over a first predetermined duration. The method includes preparing the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; validating the BG data sample via sample selection criteria; and, if the sample is valid, estimating HbA1c.
One aspect of an embodiment of the invention includes a system for estimating a patient's HbA1c based on BG data collected over a first predetermined duration. The system includes a database component operable to maintain the identified BG data, and a processor programmed to prepare the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; validating the BG data sample via sample selection criteria; and, if the sample is valid, estimating HbA1c.
One aspect of an embodiment of the invention includes a system for estimating a patient's HbA1c based on BG data collected over a first predetermined duration. The system includes a BG acquisition mechanism for acquiring BG data from the patient; a database component operable to maintain the identified BG data; and a processor. The processor is programmed to prepare the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; validating the BG data sample via sample selection criteria; and, if the sample is valid, estimating HbA1c.
An aspect of an embodiment of the present invention includes a method (or, alternatively, a computer program) of estimating a patient's HbA1c based on BG data collected over a first predetermined duration, without requiring a prior HbA1c reading. The method includes preparing the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; validating the BG data sample via sample selection criteria; and, if the sample is valid, estimating HbA1c.
An aspect of an embodiment of the present invention includes a system for estimating a patient's HbA1c based on BG data collected over a first predetermined duration, without requiring a prior HbA1c reading. The system includes a database component operable to maintain the identified BG data, and a processor. The processor is programmed to prepare the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; validating the BG data sample via sample selection criteria; and, if the sample is valid, estimating HbA1c.
An aspect of an embodiment of the present invention includes a system for estimating a patient's HbA1c based on BG data collected over a first predetermined duration, without requiring a prior HbA1c reading. The system includes a BG acquisition mechanism for acquiring BG data from the patient; a database component operable to maintain the identified BG data; and a processor. The processor is programmed to prepare the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; validating the BG data sample via sample selection criteria; and, if the sample is valid, estimating HbA1c.
These aspects of the invention, as well as other aspects discussed throughout this document, can be combined to provide continuous information about the glycemic control of a diabetic patient and to improve monitoring of the risk of hypoglycemia.
These and other objects of the invention, together with the advantages and features thereof, will become more apparent from the description, drawings and claims set forth herein below.
Drawings
The foregoing and other objects, features and advantages of the invention, as well as the invention itself, will be more fully understood from the following description of the preferred embodiments, when read in conjunction with the accompanying drawings, wherein:
fig. 1 graphically provides the empirical and theoretical probability of developing moderate (dashed line) and severe (solid line) hypoglycemia within 1 month after SMBG estimation for each of the 15 risk level ranges defined by the low BG index of example No. 1.
Fig. 2 graphically provides the empirical and theoretical probability of developing moderate (dashed line) and severe (solid line) hypoglycemia within 3 months after SMBG estimation for each of the 15 risk level ranges defined by the low BG index of example No. 1.
Fig. 3 graphically provides the empirical and theoretical probability of developing moderate (dashed line) and severe (solid line) hypoglycemia within 6 months after SMBG estimation for each of the 15 risk level ranges defined by the low BG index of example No. 1.
Fig. 4 graphically provides the empirical and theoretical probability of developing moderate (dashed line) and severe (solid line) hypoglycemia 2 or more times within 3 months after SMBG estimation for each of the 15 risk level ranges defined by the low BG index of example No. 1.
Fig. 5 graphically provides the empirical and theoretical probability of developing moderate (dashed line) and severe (solid line) hypoglycemia 2 or more times within 6 months after SMBG estimation for each of the 15 risk level ranges defined by the low BG index of example No. 1.
FIG. 6 is a functional block diagram of a computer system for implementing the present invention.
Fig. 7-9 are schematic block diagrams of alternative variations of the relevant processors, communication connections and systems of the present invention.
Fig. 10 graphically provides empirical and theoretical probabilities of developing 3 or more moderate (dashed lines) and severe (solid lines) hypoglycemia within 6 months after SMBG estimation for each of the 15 risk level ranges defined by the low BG index of example No. 1.
FIG. 11 graphically shows the residual analysis of the present model, demonstrating an approximately normal distribution of the residuals for training data set 1 of Example No. 1.
FIG. 12 graphically shows the residual analysis of the present model, demonstrating an approximately normal distribution of the residuals of Example No. 1.
FIG. 13 graphically shows the statistical evidence given by the normal probability plot of Example No. 1.
FIG. 14 graphically provides the smooth dependence between the hit rate, in percent, and the ratio R_ud of Example No. 1.
FIG. 15 graphically provides the dependence between the prediction periods and the corresponding hit rates in Example No. 1.
FIGS. 16(A)-(B) graphically provide the risk of significant hypoglycemia in T1DM within 1 month as predicted by the LBGI in Example No. 2, with ANOVA of the number of severe hypoglycemia episodes per risk group (F = 7.2, p < 0.001) and ANOVA of the number of moderate hypoglycemia episodes per risk group (F = 13.9, p < 0.001).
FIGS. 17(A)-(B) graphically provide the risk of significant hypoglycemia in T1DM within 3 months as predicted by the LBGI in Example No. 2, with ANOVA of the number of severe hypoglycemia episodes per risk group (F = 9.2, p < 0.001) and ANOVA of the number of moderate hypoglycemia episodes per risk group (F = 14.7, p < 0.001).
FIGS. 18(A)-(B) graphically provide the risk of significant hypoglycemia in T2DM within 1 month as predicted by the LBGI in Example No. 2, with ANOVA of the number of severe hypoglycemia episodes per risk group (F = 6.0, p < 0.001) and ANOVA of the number of moderate hypoglycemia episodes per risk group (F = 25.1, p < 0.001).
FIGS. 19(A)-(B) graphically provide the risk of significant hypoglycemia in T2DM within 3 months as predicted by the LBGI in Example No. 2, with ANOVA of the number of severe hypoglycemia episodes per risk group (F = 5.3, p < 0.01) and ANOVA of the number of moderate hypoglycemia episodes per risk group (F = 20.1, p < 0.001).
Detailed Description
The present invention enables, but is not limited to, the production of accurate methods for evaluating the glycemic control of a person with diabetes, and includes firmware and software code for computing the key components of these methods. The inventive methods for estimating HbA1c, the long-term probability of SH, and the short-term risk of hypoglycemia can also be validated on the large volume of collected data, as discussed later herein. Finally, the outputs of these methods can be combined into a structured display or matrix.
I. Estimating HbA1c
One aspect of the present invention includes a method, system, and computer program product for estimating HbA1c from SMBG data collected over a predetermined period, e.g., 4-6 weeks. In one embodiment, the present invention provides a computerized (or other) method and system for estimating a patient's HbA1c based on BG data collected over a predetermined duration. The method includes estimating the patient's HbA1c based on BG data collected over a first predetermined duration by preparing the data for estimating HbA1c using a predetermined sequence of mathematical formulas defined as: preprocessing the data; estimating HbA1c using at least one of four predetermined formulas; and verifying the validity of the estimate via sample selection criteria. The first predetermined duration can be about 60 days; alternatively, it can range from about 45 days to about 75 days, or from about 45 days to about 90 days, or as desired. Data preprocessing for each patient includes converting plasma BG to whole-blood BG in mg/dl; converting BG measured in mg/dl to mmol/l; and calculating the Low BG Index (RLO1) and the High BG Index (RHI1). The preprocessing of each patient's data uses predetermined mathematical formulas defined as follows: plasma BG is converted to whole-blood BG in mg/dl by BG = PLASBG(mg/dl)/1.12; BG measured in mg/dl is converted to mmol/l by BGMM = BG/18; and the Low BG Index (RLO1) and High BG Index (RHI1) are then calculated.
The preprocessing further uses predetermined mathematical formulas defined as: Scale = [ln(BG)]^1.0845 − 5.381, where BG is measured in mg/dl; Risk1 = 22.765·(Scale)²; RiskLO = Risk1 if BG is less than about 112.5 (contributing to the LBGI), otherwise RiskLO = 0; RiskHI = Risk1 if BG is greater than about 112.5 (contributing to the HBGI), otherwise RiskHI = 0; BGMM1 = mean BGMM per patient; RLO1 = mean RiskLO per patient; RHI1 = mean RiskHI per patient; L06 = mean RiskLO computed over nighttime readings only (missing by default if there are no nighttime readings); N06, N12, N24 = the percentages of SMBG readings in each time interval; NC1 = the total number of SMBG readings within the first predetermined duration; and NDAYS = the number of days with SMBG readings within the first predetermined duration. N06, N12, and N24 are the percentages of SMBG readings in the time intervals of about 0:00-6:59, about 7:00-12:59, and about 18:00-23:59, respectively, or other desired intervals and numbers of intervals.
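The preprocessing formulas above can be sketched as a single pass over a patient's readings. This is an illustrative implementation only (function and key names are ours); it computes the per-patient aggregates BGMM1, RLO1, RHI1, and L06 exactly as the formulas describe, with nighttime membership supplied by the caller.

```python
import math

def preprocess(readings_mgdl, is_plasma=False, night=None):
    """Per-patient preprocessing sketch of the formulas above.
    readings_mgdl: SMBG readings in mg/dl; night: parallel booleans
    marking readings taken in the nighttime (0:00-6:59) interval."""
    night = night or [False] * len(readings_mgdl)
    bgmm, rlo, rhi, rlo_night = [], [], [], []
    for bg, is_night in zip(readings_mgdl, night):
        if is_plasma:
            bg = bg / 1.12              # plasma BG -> whole-blood BG (mg/dl)
        bgmm.append(bg / 18.0)          # mg/dl -> mmol/l
        scale = math.log(bg) ** 1.0845 - 5.381
        risk1 = 22.765 * scale * scale
        lo = risk1 if bg < 112.5 else 0.0   # contribution to the LBGI
        hi = risk1 if bg > 112.5 else 0.0   # contribution to the HBGI
        rlo.append(lo)
        rhi.append(hi)
        if is_night:
            rlo_night.append(lo)
    n = len(readings_mgdl)
    return {
        "BGMM1": sum(bgmm) / n,
        "RLO1": sum(rlo) / n,
        "RHI1": sum(rhi) / n,
        # L06 is "missing by default" when there are no nighttime readings
        "L06": sum(rlo_night) / len(rlo_night) if rlo_night else None,
    }
```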
The method further includes assigning a group value based on the patient's High BG Index, calculated using a predetermined mathematical formula. This formula may be defined as: if (RHI1 ≤ about 5.25 or RHI1 ≥ about 16), assign group 0; if (RHI1 > about 5.25 and RHI1 < about 7.0), assign group 1; if (RHI1 ≥ about 7.0 and RHI1 < about 8.5), assign group 2; and if (RHI1 ≥ about 8.5 and RHI1 < about 16), assign group 3.
Next, the method may further include computing the estimate using predetermined mathematical formulas defined as:
E0 = 0.55555·BGMM1 + 2.95; E1 = 0.50567·BGMM1 + 0.074·L06 + 2.69; E2 = 0.5555·BGMM1 − 0.074·L06 + 2.96; E3 = 0.44000·BGMM1 + 0.035·L06 + 3.65; and EST2 = E1 if (group = 1), EST2 = E2 if (group = 2), EST2 = E3 if (group = 3), and EST2 = E0 otherwise.
The method includes further modifying the estimate using predetermined mathematical formulas defined as: if L06 is missing (default), EST2 = E0; if (RLO1 ≤ about 0.5 and RHI1 ≤ about 2.0), EST2 = E0 − 0.25; if (RLO1 ≤ about 2.5 and RHI1 > about 26), EST2 = E0 − 1.5·RLO1; and if ((RLO1/RHI1) ≤ about 0.25 and L06 > about 1.3), EST2 = EST2 − 0.08.
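The group assignment, the four candidate estimates, and the subsequent modifications can be combined into one small function. The sketch below is illustrative (the function name is ours) and applies the conditions in the order given above, treating a missing L06 as the "default" case.

```python
def estimate_hba1c(BGMM1, RLO1, RHI1, L06):
    """Sketch of the EST2 computation described above.
    L06 may be None when no nighttime readings exist (default case)."""
    # Group assignment by the High BG Index
    if 5.25 < RHI1 < 7.0:
        group = 1
    elif 7.0 <= RHI1 < 8.5:
        group = 2
    elif 8.5 <= RHI1 < 16:
        group = 3
    else:
        group = 0
    l06 = 0.0 if L06 is None else L06
    e0 = 0.55555 * BGMM1 + 2.95
    e1 = 0.50567 * BGMM1 + 0.074 * l06 + 2.69
    e2 = 0.5555 * BGMM1 - 0.074 * l06 + 2.96
    e3 = 0.44000 * BGMM1 + 0.035 * l06 + 3.65
    est2 = {0: e0, 1: e1, 2: e2, 3: e3}[group]
    # Modifications, applied in the order given in the text
    if L06 is None:
        est2 = e0
    if RLO1 <= 0.5 and RHI1 <= 2.0:
        est2 = e0 - 0.25
    if RLO1 <= 2.5 and RHI1 > 26:
        est2 = e0 - 1.5 * RLO1
    if RHI1 > 0 and (RLO1 / RHI1) <= 0.25 and l06 > 1.3:
        est2 -= 0.08
    return est2
```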
Estimating the patient's HbA1c from BG data collected over the first predetermined duration can be performed using at least one of four predetermined mathematical formulas, defined as follows:
a) HbA1c = EST2, as defined or modified above;
b) HbA1c = 0.809098·BGMM1 + 0.064540·RLO1 − 0.151673·RHI1 + 1.873325, where BGMM1 is the mean BG (mmol/l), RLO1 is the Low BG Index, and RHI1 is the High BG Index;
c) HbA1c = 0.682742·HBA0 + 0.054377·RHI1 + 1.553277, where HBA0 is a prior reference HbA1c reading taken within about a second predetermined time period before the estimate, and RHI1 is the High BG Index; or
d) HbA1c = 0.41046·BGMM1 + 4.0775, where BGMM1 is the mean BG (mmol/l). The second predetermined duration can be about 3 months; from about 2.5 months to about 3.5 months; from about 2.5 months to 6 months; or as desired.
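Formulas (b)-(d) are plain linear regressions and can be sketched directly (formula (a) is the EST2 procedure described earlier). The function names below are ours, not part of the specification; the coefficients are taken verbatim from the formulas above.

```python
def hba1c_b(BGMM1, RLO1, RHI1):
    # Formula (b): regression on mean BG and the low/high BG indices
    return 0.809098 * BGMM1 + 0.064540 * RLO1 - 0.151673 * RHI1 + 1.873325

def hba1c_c(HBA0, RHI1):
    # Formula (c): uses a prior reference HbA1c reading (HBA0)
    return 0.682742 * HBA0 + 0.054377 * RHI1 + 1.553277

def hba1c_d(BGMM1):
    # Formula (d): mean BG (mmol/l) only
    return 0.41046 * BGMM1 + 4.0775
```

For instance, with a mean BG of 10 mmol/l, formula (d) gives an HbA1c of about 8.18%.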
The validity of the estimate is verified via sample selection criteria: the HbA1c estimate is used only if the sample from the first predetermined duration meets at least one of the following four criteria:
a) a testing-frequency criterion, in which the sample from the first predetermined duration averages at least about 1.5 to about 2.5 readings per day;
b) an alternative testing-frequency criterion, in which the average frequency of readings over a third predetermined sampling period is about 1.8 readings/day (or another desired average frequency);
c) a data-randomness criterion 1, in which the HbA1c estimate is verified and displayed only when the ratio (RLO1/RHI1) ≥ about 0.005, where RLO1 is the Low BG Index and RHI1 is the High BG Index; or
d) a data-randomness criterion 2, in which the HbA1c estimate is verified and displayed only when NO6 > about 3%, where NO6 is the percentage of nighttime readings.
The third predetermined duration can be at least 35 days, ranging from about 35 days to about 40 days, or from about 35 days up to the first predetermined duration, or as desired.
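The four criteria combine with a logical OR, as the text requires at least one to hold. The sketch below is illustrative (argument and function names are ours), and it uses the lower bound of criterion (a)'s range as the threshold, which is an assumption on our part.

```python
def sample_is_valid(readings_per_day, RLO1, RHI1, NO6_pct):
    """Sketch of the sample selection criteria: the HbA1c estimate is
    shown only if at least one of the four criteria holds."""
    freq_ok = readings_per_day >= 1.5        # (a) testing frequency (lower bound assumed)
    freq_alt_ok = readings_per_day >= 1.8    # (b) alternative testing frequency
    random1_ok = RHI1 > 0 and (RLO1 / RHI1) >= 0.005   # (c) data randomness 1
    random2_ok = NO6_pct > 3.0               # (d) nighttime coverage
    return freq_ok or freq_alt_ok or random1_ok or random2_ok
```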
II. Long-Term Probability of Severe Hypoglycemia (SH)
Another aspect of the invention includes a method, system, and computer program product for estimating the long-term probability of Severe Hypoglycemia (SH). The method uses SMBG readings for a predetermined period, e.g., about 4-6 weeks, and predicts the risk of SH in the subsequent about 6 months. In one embodiment, the present invention provides a computerized (or other type of) method and system for estimating the long-term probability of Severe Hypoglycemia (SH) in a patient based on BG data collected over a predetermined duration. A method of estimating a long term probability of Severe Hypoglycemia (SH) or Moderate Hypoglycemia (MH) in a patient from BG data collected over a predetermined duration of time includes: calculating the LBGI according to the collected BG data; and estimating the number of future SH events using a predetermined mathematical formula based on the calculated LBGI. The LBGI is calculated by the time point t1,t2,...,tnCollecting a series of BG readings x1,x2,...,xnMathematically defining:
LBGI = (1/n)·Σ(i=1..n) lbgi(xi; a), wherein lbgi(BG; a) = 10·|f(BG)|^a if f(BG) < 0, and 0 otherwise, and wherein a ≈ 2 is the weight parameter (or other desired weight parameter).
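As a minimal sketch, the LBGI computation can be written out in Python. The symmetrizing function f(BG) below reuses the constants of the Scale formula given later in this disclosure (exponent 1.0845 and offset 5.381 for BG in mg/dl) together with an assumed scaling factor of 1.509, chosen so that 10·f(BG)^2 matches the Risk = 22.765·(Scale)^2 expression; treat the constants as illustrative:

```python
import math

def f_bg(bg_mgdl):
    # Symmetrizing transform of the BG scale (BG in mg/dl); negative for
    # low readings, positive for high ones, and ~0 near 112.5 mg/dl.
    return 1.509 * (math.log(bg_mgdl) ** 1.0845 - 5.381)

def lbgi(readings, a=2):
    # Low BG Index: average over all readings of 10*|f(BG)|^a, counting
    # only readings with f(BG) < 0 (a ~ 2 is the weight parameter).
    risks = [10 * abs(f_bg(x)) ** a if f_bg(x) < 0 else 0.0
             for x in readings]
    return sum(risks) / len(risks)
```

A sample containing no readings below about 112.5 mg/dl yields an LBGI of 0, while readings in the hypoglycemic range raise it quickly.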
Predetermined risk Ranges (RCAT) are defined, whereby each risk Range (RCAT) represents a range of values of the LBGI, and the LBGI is assigned to at least one of said risk Ranges (RCAT). The risk Ranges (RCAT) are defined as follows:
range 1, wherein said LBGI is less than about 0.25;
range 2, wherein said LBGI is between about 0.25 and about 0.50;
range 3, wherein said LBGI is between about 0.50 and about 0.75;
range 4, wherein said LBGI is between about 0.75 and about 1.0;
range 5, wherein said LBGI is between about 1.0 and about 1.25;
range 6, wherein said LBGI is between about 1.25 and about 1.50;
range 7, wherein said LBGI is between about 1.50 and about 1.75;
range 8, wherein said LBGI is between about 1.75 and about 2.0;
range 9, wherein said LBGI is between about 2.0 and about 2.5;
range 10, wherein said LBGI is between about 2.5 and about 3.0;
range 11, wherein said LBGI is between about 3.0 and about 3.5;
range 12, wherein said LBGI is between about 3.5 and about 4.25;
range 13, wherein said LBGI is between about 4.25 and about 5.0;
range 14, wherein said LBGI is between about 5.0 and about 6.5; and
range 15, wherein said LBGI is greater than about 6.5.
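The range lookup can be sketched directly from the boundaries above (Python; the function name is illustrative):

```python
# Upper bounds of risk Ranges 1-14; Range 15 is everything above 6.5.
RCAT_UPPER_BOUNDS = [0.25, 0.50, 0.75, 1.0, 1.25, 1.50, 1.75,
                     2.0, 2.5, 3.0, 3.5, 4.25, 5.0, 6.5]

def rcat(lbgi_value):
    """Map an LBGI value to its 1-based risk Range (RCAT) number."""
    for number, upper in enumerate(RCAT_UPPER_BOUNDS, start=1):
        if lbgi_value < upper:
            return number
    return 15
```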
Then, a probability of occurrence of a selected number of SH events is defined for each of the specified risk Ranges (RCAT). The probability of occurrence of the selected number of SH events within the next first predetermined duration is defined for each of said specified risk Ranges (RCAT) using the following formula: F(x) = 1 − exp(−a·x^b) for x > 0, and 0 otherwise, wherein a ≈ 4.19 and b ≈ 1.75 (a and/or b may be other desired values). The first predetermined duration can be about 1 month, ranging from about 0.5 months to about 1.5 months, or from about 0.5 months to about 3 months, or as desired.
Furthermore, the probability of occurrence of the selected number of SH events within the next second predetermined duration is defined for each of said specified risk Ranges (RCAT) using the formula F(x) = 1 − exp(−a·x^b) for x > 0, and 0 otherwise, wherein a ≈ 3.28 and b ≈ 1.50 (a and/or b may be other desired values). The second predetermined duration can be about 3 months, ranging from about 2 months to about 4 months, or from about 3 months to about 6 months, or as desired.
Further, the probability of occurrence of the selected number of SH events within the next third predetermined duration is defined for each of said specified risk Ranges (RCAT) using the formula F(x) = 1 − exp(−a·x^b) for x > 0, and 0 otherwise, wherein a ≈ 3.06 and b ≈ 1.45 (a and/or b may be other desired values). The third predetermined duration can be about 6 months, ranging from about 5 months to about 7 months, or from about 3 months to about 9 months, or as desired.
Optionally, the probability of occurrence of a selected number of MH events within the next first predetermined duration (about 1 month, ranging from about 0.5-1.5 months, from about 0.5-3 months, or as desired) is defined for each of said specified risk Ranges (RCAT) using the formula F(x) = 1 − exp(−a·x^b) for x > 0, and 0 otherwise, wherein a ≈ 1.58 and b ≈ 1.05 (a and/or b may be other desired values).
Optionally, the probability of occurrence of a selected number of MH events within the next second predetermined duration (about 3 months, ranging from about 2-4 months, from about 3-6 months, or as desired) is defined for each of said specified risk Ranges (RCAT) using the formula F(x) = 1 − exp(−a·x^b) for x > 0, and 0 otherwise, wherein a ≈ 1.37 and b ≈ 1.14 (a and/or b may be other desired values).
Optionally, the probability of occurrence of a selected number of MH events within the next third predetermined duration (about 6 months, ranging from about 5-7 months, from about 3-9 months, or as desired) is defined for each of said specified risk Ranges (RCAT) using the formula F(x) = 1 − exp(−a·x^b) for x > 0, and 0 otherwise, wherein a ≈ 1.37 and b ≈ 1.35 (a and/or b may be other desired values).
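The six probability curves above share the form F(x) = 1 − exp(−a·x^b) and differ only in (a, b). A sketch in Python, taking x to be the patient's LBGI (an assumption consistent with the surrounding text) and using the approximate parameter values quoted above:

```python
import math

# (a, b) per event type and horizon in months; approximate values from
# the text above.
SH_MH_PARAMS = {
    ("SH", 1): (4.19, 1.75),
    ("SH", 3): (3.28, 1.50),
    ("SH", 6): (3.06, 1.45),
    ("MH", 1): (1.58, 1.05),
    ("MH", 3): (1.37, 1.14),
    ("MH", 6): (1.37, 1.35),
}

def event_probability(x, kind="SH", months=6):
    # F(x) = 1 - exp(-a * x**b) for x > 0, and 0 otherwise.
    a, b = SH_MH_PARAMS[(kind, months)]
    return 1.0 - math.exp(-a * x ** b) if x > 0 else 0.0
```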
Moreover, a classification of the patient's risk of significant hypoglycemia in the future is specified. The classification is defined as follows: minimal risk, wherein the LBGI is less than about 1.25; low risk, wherein the LBGI is between about 1.25 and about 2.50; moderate risk, wherein the LBGI is between about 2.50 and about 5; and high risk, wherein the LBGI is greater than about 5 (other classification ranges can also be implemented as desired).
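A sketch of the four-level risk classification in Python (the label strings are illustrative; boundaries follow the "about" values above):

```python
def hypoglycemia_risk_class(lbgi_value):
    # Four-level classification of future risk of significant hypoglycemia.
    if lbgi_value < 1.25:
        return "minimal"
    elif lbgi_value < 2.50:
        return "low"
    elif lbgi_value < 5.0:
        return "moderate"
    return "high"
```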
Short-term probability of Severe Hypoglycemia (SH)
Yet another aspect of the present invention includes a method, system and computer program product for identifying periods of high risk of hypoglycemia within 24 hours (or other selected periods). This is achieved by calculating the short-term risk of hypoglycemia using SMBG readings collected over the previous 24 hours. In one embodiment, the present invention provides a computerized method and system for estimating the short-term risk of Severe Hypoglycemia (SH) in a patient based on BG data collected over a predetermined duration. A method of estimating the short-term risk of Severe Hypoglycemia (SH) in a patient from BG data collected over a predetermined duration includes: calculating a scale value from the collected BG data; and calculating a low BG risk value (RLO) for each BG reading. The calculation of RLO(BG) is mathematically defined as Scale = [ln(BG)]^1.0845 − 5.381, wherein BG is measured in mg/dl; Risk = 22.765·(Scale)^2; if BG is less than about 112.5 mg/dl, then RLO(BG) = Risk, otherwise RLO(BG) = 0. Alternatively, the calculation of RLO(BG) is mathematically defined as Scale = [ln(BG)]^1.026 − 1.861, wherein BG is measured in mmol/l; Risk = 32.184·(Scale)^2; if BG is less than about 6.25 mmol/l (about 112.5 mg/dl), then RLO(BG) = Risk, otherwise RLO(BG) = 0.
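The per-reading low-BG risk in mg/dl units can be sketched as follows (Python; the mmol/l variant is analogous with the alternative constants):

```python
import math

def rlo_mgdl(bg):
    # Scale = [ln(BG)]^1.0845 - 5.381 (BG in mg/dl);
    # Risk = 22.765 * Scale^2; RLO(BG) = Risk only below ~112.5 mg/dl,
    # so readings in the normal and high range contribute zero risk.
    scale = math.log(bg) ** 1.0845 - 5.381
    return 22.765 * scale ** 2 if bg < 112.5 else 0.0
```

Lower readings produce sharply higher risk values, e.g. RLO(50 mg/dl) is several times RLO(80 mg/dl).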
The LBGI can be calculated from the collected BG data. The LBGI is mathematically defined from a series of BG readings x1, x2, ..., xn collected at time points t1, t2, ..., tn as LBGI = (1/n)·Σ(i=1..n) lbgi(xi), wherein lbgi(BG) = RLO(BG).
The provisional (running) LBGI can be calculated from the collected BG data. The calculation of the provisional LBGI is mathematically defined as:
LBGI(1) = RLO(x1); RLO2(1) = 0; LBGI(j) = ((j−1)/j)·LBGI(j−1) + (1/j)·RLO(xj); and RLO2(j) = ((j−1)/j)·RLO2(j−1) + (1/j)·RLO(xj)^2.
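The running recursion can be sketched as follows (Python). By construction LBGI(n) equals the plain average of the RLO values, so the incremental form suits a meter that updates after every reading; carrying RLO(x)^2 in RLO2 (an assumption of this sketch) lets a spread measure such as RLO2(n) − LBGI(n)^2 be formed later:

```python
def running_lbgi(rlo_values):
    # LBGI(1) = RLO(x1); RLO2(1) = 0;
    # LBGI(j) = ((j-1)/j)*LBGI(j-1) + (1/j)*RLO(xj);
    # RLO2(j) = ((j-1)/j)*RLO2(j-1) + (1/j)*RLO(xj)^2.
    lbgi_j = rlo_values[0]
    rlo2_j = 0.0
    for j in range(2, len(rlo_values) + 1):
        r = rlo_values[j - 1]
        lbgi_j = ((j - 1) / j) * lbgi_j + (1.0 / j) * r
        rlo2_j = ((j - 1) / j) * rlo2_j + (1.0 / j) * r * r
    return lbgi_j, rlo2_j
```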
The SBGI can be calculated using the mathematical formula defined as SBGI(n) = [RLO2(n) − LBGI(n)^2]^(1/2).
the present invention, in turn, provides authentication (qualification) and alerting of the impending short-term SH. Authentication and alerting are performed if (LBGI (150) ≥ 2.5 and (LBGI (50) ≥ 1.5 ANG LBGI (150) and SBGI (50) ≥ SBGI (150)), the alert is confirmed or issued, or RLO ≥ LBGI (150) +1.5 ANG (SBGI (150)), the alert is confirmed or issued, otherwise authentication or alert is not required.
The present invention then optionally provides authentication or alerting of impending short-term SH. The alert is confirmed or issued if (LBGI(n) ≥ α and RLO(n) ≥ β), and/or if (RLO(n) ≥ LBGI(n) + γ·SBGI(n)); otherwise no authentication or alert is required, wherein α, β and γ are threshold parameters.
The threshold parameters α, β, and γ are defined as α ≈ 5, β ≈ 7.5, and γ ≈ 1.5. Other possible parameter combinations are given in the table below; the values may be similar to those given, or any intermediate combination of the values in the table.
| α | β | γ | α | β | γ |
| 6.4 | 8.2 | 1.5 | 5.0 | 7.5 | 1.3 |
| 6.0 | 7.5 | 1.5 | 4.9 | 7.0 | 1.2 |
| 5.5 | 7.5 | 1.5 | 4.9 | 7.0 | 1.2 |
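A sketch of the generic alert test with the tabulated threshold parameters (Python). How the α and β comparisons combine with the deviation test is stated only loosely above, so the conjunction chosen here is an assumption:

```python
def sh_alert(lbgi_n, sbgi_n, rlo_n, alpha=5.0, beta=7.5, gamma=1.5):
    # Alert when the running low-BG index and the latest reading's risk
    # both exceed their thresholds, or when the latest risk lies more
    # than gamma spread-units (SBGI) above the running LBGI.
    return (lbgi_n >= alpha and rlo_n >= beta) or \
           (rlo_n >= lbgi_n + gamma * sbgi_n)
```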
Example System
The method of the present invention may be implemented in hardware, software, or a combination thereof, and can be implemented in one or more computer systems or other processing systems, such as a Personal Digital Assistant (PDA), or directly in a blood glucose self-monitoring device (SMBG storage meter) having sufficient storage and processing capabilities. In one example embodiment, the present invention is software running on a general purpose computer 600 as shown in FIG. 6. Computer system 600 includes one or more processors, such as processor 604. Processor 604 is connected to a communication infrastructure 606 (e.g., a communications bus, cross-over bar, or network). Computer system 600 may include a display interface 602 that transfers graphics, text, and other data from the communication infrastructure 606 (or from a frame buffer not shown) for display on display unit 630.
Computer system 600 also includes a main memory 608, preferably Random Access Memory (RAM), and may also include a secondary memory 610. Secondary memory 610 may include, for example, a hard disk drive 612 and/or a removable storage drive 614, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, etc. Removable storage drive 614 reads from and/or writes to a removable storage unit 618 in a well known manner. Removable storage unit 618 represents a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 614. As will be appreciated, removable storage unit 618 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative embodiments, secondary memory 610 may include other means for allowing computer programs or other instructions to be loaded into computer system 600. Such means may include, for example, a removable storage unit 622 and an interface 620. Examples of such removable storage units/interfaces include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as a ROM, PROM, EPROM or EEPROM) and associated socket, and other removable storage units 622 and interfaces 620 which allow software and data to be transferred from the removable storage unit 622 to computer system 600.
Computer system 600 may also include a communications interface 624. Communications interface 624 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 624 include a modem, a network interface (e.g., an Ethernet card), a communications port (e.g., serial or parallel), a PCMCIA slot and card, and the like. Software and data transferred via communications interface 624 are in the form of signals 628, which may be electronic, electromagnetic, optical or other signals capable of being received by communications interface 624. Signals 628 are provided to communications interface 624 via a communications path (i.e., channel) 626. Channel 626 carries signals 628 and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link, an infrared link, and other communications channels.
In this document, the terms "computer program medium" and "computer usable medium" are used to generally refer to media such as removable storage drive 614, a hard disk installed in hard disk drive 612, and signals 628. These computer program products are means for providing software to computer system 600. The present invention includes such a computer program product.
Computer programs (also called computer control logic) are stored in main memory 608 and/or secondary memory 610. Computer programs may also be received via communications interface 624. Such computer programs, when executed, enable computer system 600 to perform the features of the present invention as discussed below. In particular, the computer programs, when executed, enable the processor 604 to perform the functions of the present invention. Accordingly, such computer programs represent controllers of the computer system 600.
In one embodiment where the invention is implemented in software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 614, hard drive 612 or communications interface 624. The control logic (software), when executed by the processor 604, causes the processor 604 to perform the functions of the invention as described below.
In another embodiment, the invention is implemented primarily in hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of a hardware state machine to perform the functions described herein will be apparent to those skilled in the relevant art.
In another embodiment, the invention is implemented using a combination of hardware and software.
In an exemplary software embodiment of the present invention, the above methods are implemented in the SPSS control language, but could be implemented in other languages, such as, but not limited to, the C++ programming language or other languages available to those skilled in the art.
Figs. 7-9 show block diagrams representing alternative embodiments of the present invention. Referring to Fig. 7, a block diagram of a system 710 is shown that generally includes a glucose meter 728 used by the patient 712 to record, among other things, insulin dose readings and measured blood glucose ("BG") levels. The data obtained by the glucose meter 728 is preferably transferred, via a suitable communication link 714 or data modem 732, to a processing station or chip, such as a personal computer 740 or a PDA, or via a portable telephone or a suitable internet port. For example, the data may be stored in the glucose meter 728 and downloaded directly to a personal computer via a suitable interface cable, and then transmitted to the processing station via the internet. One example is the ONE TOUCH monitoring system or meter manufactured by Lifescan, Inc., which is compatible with the IN TOUCH software and includes an interface cable for downloading to a personal computer.
Blood glucose meters are common in the industry and include essentially any device that can function as a BG acquisition mechanism. The BG meter or acquisition mechanism, device, tool or system includes various conventional methods of extracting a blood sample (e.g., by finger prick) for each test, and a determination of the glucose level using an instrument that reads the glucose concentration by electrochemical or colorimetric methods. Recently, various methods have been developed to determine blood analyte concentrations without the need for blood withdrawal. For example, U.S. patent No.5,267,152 to Yang et al. (incorporated herein by reference) describes a non-invasive technique for measuring blood glucose concentration using near-IR diffuse-reflectance laser spectroscopy. Similar near-IR spectroscopy devices are described in U.S. patent No.5,086,229 to Rosenthal et al. and U.S. patent No.4,975,581 to Robinson et al. (incorporated herein by reference).
U.S. patent No.5,139,023 to Stanley (incorporated herein by reference) describes a transdermal blood glucose monitoring device that relies on a permeability enhancer (e.g., bile salts) to facilitate the transdermal movement of glucose along a concentration gradient established between interstitial fluid and a receiving medium. U.S. patent No.5,036,861 to Sembrowich (incorporated herein by reference) describes a passive glucose monitor that collects sweat through a skin patch, where cholinergic agents are used to stimulate sweat secretion from sweat glands. Similar sweat collection devices are described in U.S. patent No.5,076,273 to Schoendorfer and U.S. patent No.5,140,985 to Schroeder (incorporated herein by reference).
Further, U.S. patent No.5,279,543 to Glikfeld (incorporated herein by reference) describes the use of iontophoresis to non-invasively sample a substance through the skin into a reservoir at the surface of the skin. Glikfeld teaches that this sampling process can be coupled to a glucose-specific biosensor or glucose-specific electrode to monitor blood glucose. Furthermore, International publication No. WO 96/00110 to Tamada (incorporated herein by reference) describes an iontophoresis device for transcutaneous monitoring of a target substance, wherein iontophoresis electrodes are used to move an analyte into a collector and a biosensor is used to detect the target analyte in a container. Finally, U.S. patent No.6,144,869 to Berner (incorporated herein by reference) describes a sampling system for measuring the concentration of an analyte present.
Further, the BG meter or acquisition mechanism may also include indwelling catheters and subcutaneous tissue fluid sampling.
The computer or PDA 740 includes software and hardware necessary to process, analyze and interpret the self-recorded diabetic patient data according to a predetermined flow sequence (as described in detail above) and generate an appropriate data interpretation output. Preferably, reports are generated for display by a printer connected to the personal computer 740 based on the results of data analysis and interpretation performed on patient data stored by the computer 740. Alternatively, the results of the data interpretation program may be displayed directly on a video display unit connected to the computer 740.
FIG. 8 shows a block diagram representing an alternative embodiment having a diabetes management system that is a patient-operated device 810, whose housing is preferably compact enough to enable the device 810 to be hand-held and carried by the patient. A strip guide for receiving a blood glucose test strip (not shown) is carried on a surface of the housing. The test strip receives a blood sample from the patient. The device includes a microprocessor 822 and a memory 824 connected to microprocessor 822. Microprocessor 822 is designed to execute a computer program stored in memory 824 to perform the various calculations and control functions described in detail above. A keypad 816 is connected to microprocessor 822 through a standard keypad decoder 826. Display 814 is connected to microprocessor 822 through a display driver 830. Microprocessor 822 communicates with display driver 830 via an interface, and display driver 830 updates and refreshes display 814 under the control of microprocessor 822. Speaker 854 and clock 856 are also connected to microprocessor 822. Speaker 854 operates under the control of microprocessor 822 to emit audible warnings alerting the patient to possible future hypoglycemia. Clock 856 supplies the current date and time to microprocessor 822.
Memory 824 also stores blood glucose values, insulin dosage values, insulin types for patient 812 and parameter values used by microprocessor 822 to calculate future blood glucose values, supplemental insulin dosages, and carbohydrate supplements. Each blood glucose value and insulin dosage value is stored with a corresponding date and time in a memory 824, the memory 824 preferably being a non-volatile memory such as an electrically erasable programmable read-only memory (EEPROM).
Device 810 also includes a blood glucose meter 828 coupled to microprocessor 822. The glucose meter 828 is designed to measure a blood sample received on the blood glucose test strip and to generate a blood glucose value for the blood sample measurement. As previously mentioned, such glucose meters are well known in the art. The glucose meter 828 is preferably of the type that produces a digital value that is output directly to the microprocessor 822. Alternatively, the type of blood glucose meter 828 may be one that generates an analog value. In this alternative embodiment, blood glucose meter 828 is coupled to microprocessor 822 through an analog-to-digital converter (not shown).
Device 810 further includes an input/output port 834, preferably comprising a series of ports for connection to microprocessor 822. Port 834 interfaces with modem 832, preferably via a standard RS232 interface. The modem 832 is used to establish communications between the device 810 and a personal computer 840, or a health care provider's computer 838 over a communications network 836. Specialized techniques for connecting electronic devices via connecting cables are well known in the art. Another alternative example is "bluetooth" technology communication.
Alternatively, the block diagram of FIG. 9 represents an embodiment having a diabetes management system that is a patient-operated device 910, similar to that shown in FIG. 8, whose housing is preferably compact enough to enable the device 910 to be hand-held and carried by the patient, but with a separate or detachable glucose meter or BG acquisition mechanism/module 928. Self-monitoring devices capable of running Algorithms 1, 2 and 3 directly and displaying the results to the patient, without sending data elsewhere, have long been available. Examples of such devices are the ULTRA SMART, manufactured by Lifescan Inc., Milpitas, CA, and the FREESTYLE TRACKER, manufactured by Therasense, Alameda, CA.
Thus, the embodiments described herein can be implemented over a data communications network, such as the Internet, so that any processor or computer at any remote location can obtain the evaluations, estimates and information, as described in FIGS. 6-9 and/or U.S. Pat. No.5,851,186 to Wood, the contents of which are incorporated herein by reference. Alternatively, a patient at a remote location may send BG data to a central health care provider or medical facility or to a different remote location.
In summary, the present invention proposes a computerized (or non-computerized) data analysis method and system for simultaneously estimating the two most important components of glycemic control in diabetic individuals: HbA1c and the risk of hypoglycemia. Using only routine SMBG data, the present method provides, among other things, three sets of output.
The method, system and computer program product of the present invention have the potential to provide the following advantages, but are not limited thereto. First, the invention improves the performance of existing home BG monitoring devices by computing and displaying: 1) an estimate of HbA1c, 2) an estimate of the probability of SH in the following 6 months, and 3) an estimate of the short-term (i.e., within the next 24 hours) risk of hypoglycemia. The latter may include warnings, such as alarms, indicating an impending hypoglycemic event. These three components can also be combined to provide continuous information about the glycemic control of diabetic individuals, thereby improving the monitoring of their risk of hypoglycemia.
As an additional advantage, the present invention improves the performance of existing SMBG data-retrieval software or hardware. Almost every manufacturer of home BG monitoring devices produces such software or hardware, and patients and health care providers typically use it to interpret SMBG data. The method and system of the present invention can be incorporated directly into existing home blood glucose monitors, or can improve the performance of SMBG data-retrieval software by introducing a data interpretation component that simultaneously predicts HbA1c and periods of high risk of hypoglycemia.
Another advantage is that the present invention can assess the accuracy of home BG monitoring devices in both the low and high BG ranges and throughout the numerical range of BG.
Moreover, as another advantage, the present invention enables evaluation of the effectiveness of various diabetes therapies.
Still further, because diabetic patients face a lifelong optimization problem of maintaining tight glycemic control without increasing their risk of hypoglycemia, the present invention addresses this problem with a simple and reliable method that assesses both a patient's glycemic control and his or her risk of hypoglycemia, and that can be applied under the patient's everyday conditions.
In addition, the present invention supplies a missing link by proposing three distinct but mutually compatible algorithms for estimating HbA1c and the risk of hypoglycemia from SMBG data, predicting the short-term and long-term risk of hypoglycemia and the long-term risk of hyperglycemia.
Another advantage is that the present invention can assess the effectiveness of new types of insulin or insulin delivery devices. Any manufacturer or researcher of insulin or insulin delivery devices can test the relative success of their proposed insulin types or delivery device designs using embodiments of the present invention.
Finally, another advantage is that the present invention enables the evaluation of the effectiveness of insulin adjunctive therapeutic drugs.
Examples of the invention
I. Example No.1
Example No.1 includes three algorithms for simultaneously estimating, from routine SMBG data, the two most important components of diabetic glycemic control: HbA1c and the risk of hypoglycemia. The method improves the performance of existing home BG monitoring devices by introducing an intelligent data interpretation component capable of simultaneously predicting HbA1c and periods of high risk of hypoglycemia. The data analysis method has three parts (algorithms):
● Algorithm 1: estimating HbA1c;
● Algorithm 2: estimating the long-term risk of Severe Hypoglycemia (SH); and
● Algorithm 3: estimating the short-term (within 24-48 hours) risk of hypoglycemia.
Algorithms 1 and 2 provide ongoing monitoring and information about the overall glycemic control of individuals with type 1 or type 2 diabetes (T1DM, T2DM), covering both the upper and lower ends of the BG range. Algorithm 3 is activated when Algorithm 2 indicates an increased long-term risk of hypoglycemia. Once activated, Algorithm 3 requires more frequent monitoring (4 times per day) and provides a 24-48 hour forecast of moderate/severe hypoglycemia.
Another important objective of Example No.1 is to examine, against existing data, a large number of hypotheses and ideas that have the potential to generate other algorithms estimating HbA1c and computing the risk of hypoglycemia in ways conceptually different from those provided by the present disclosure. The goal was to find potentially better solutions, or simply to confirm that certain ideas cannot produce better results; this serves primarily to optimize and improve the analysis of the data currently being collected in Study Example No.2.
Data sets
To ensure that our optimization results generalize, Algorithms 1 and 2 were first optimized on a training data set and then checked for accuracy on an unrelated test data set. For Algorithm 3 we currently have only one data set containing parallel SMBG and SH records. The details of the patient demographics are as follows:
(1) Training data set 1: 96 patients were diagnosed with T1DM at least 2 years prior to the study. Of these, 43 patients reported at least 2 severe hypoglycemic episodes in the past year, while 53 reported no such episodes in the same period. There were 38 men and 58 women. The mean age was 35 ± 8 years, the mean duration of diabetes was 16 ± 10 years, the mean daily insulin dose was 0.58 ± 0.19 units/kg, and the mean HbA1c was 8.6 ± 1.8%. These subjects collected approximately 13,000 SMBG readings over a period of 40-45 days, a frequency of about 3 readings per day. Data collection then continued for 6 months, with monthly records of moderate and severe hypoglycemic episodes. This data set was used as the training data set for Algorithm 1 (without previous HbA1c) and for Algorithm 2.
(2) Training data set 2: 85 patients in this study were diagnosed with T1DM at least 2 years earlier, and all reported an SH episode in the past year. There were 44 men and 41 women. The mean age was 44 ± 10 years, the mean duration of diabetes was 26 ± 11 years, the mean daily insulin dose was 0.6 ± 0.2 units/kg, the mean baseline HbA1c was 7.7 ± 1.1%, and the mean 6-month HbA1c was 7.4 ± 1% (60 subjects had a 6-month HbA1c). These subjects collected approximately 75,500 SMBG readings over the 6 months between the two HbA1c assays. The SMBG frequency was higher in data set 2, at 4-5 readings/day. Furthermore, during the 6 months of SMBG, these subjects kept records of moderate and severe hypoglycemic episodes with their dates and times of occurrence, yielding 399 SH episodes. This data set was used as the training data set for Algorithm 1 (with previous HbA1c) and for all analyses related to Algorithm 3.
(3) Test data set: We used data for N = 600 subjects, 277 with T1DM and 323 with T2DM, all of whom used insulin to treat their diabetes. These data were collected by Amylin Pharmaceuticals, San Diego, USA, and included 6-8 months of SMBG data (approximately 300,000 readings), baseline and 6-month HbA1c results, and some demographic data. These subjects participated in a clinical trial of pramlintide (at doses of 60-120 micrograms) for metabolic control. The subjects in the T1DM and T2DM groups were randomized with respect to pramlintide use (Table 1).
TABLE 1. Demographic characteristics of subjects in the test data set
| Variable | T1DM mean (SD) | T2DM mean (SD) | P level |
| Age (years) | 38.0 (13.4) | 58.1 (9.4) | <0.001 |
| Sex: male/female | 136/41 | 157/166 | ns |
| Baseline HbA1c | 9.74 (1.3) | 9.85 (1.3) | ns |
| 6-month HbA1c | 8.77 (1.1) | 8.98 (1.3) | 0.04 |
| Duration of diabetes (years) | 14.6 (9.8) | 13.5 (7.6) | ns |
| Age of onset (years) | 23.4 (12.8) | 44.6 (10.4) | <0.001 |
| # SMBG readings/day | 3.2 (1.1) | 2.9 (0.9) | <0.005 |
Table 1 gives the demographics and a comparison of the T1DM and T2DM subjects. The mean HbA1c of both the T1DM and T2DM groups decreased significantly over the first 6 months of the study, possibly due to the medication, which is beyond the scope of this report (Table 1). These relatively rapid changes in HbA1c allow a better assessment of the predictive power of Algorithm 1. In all data sets, SMBG was performed with the ONE TOUCH II or ONE TOUCH PROFILE meters of Lifescan, Inc.
Algorithm 1: Estimating HbA1c
Example No.1 shows, but is not limited to, optimization of the HbA1c prediction (Algorithm 1) by: (1) giving higher weight to SMBG readings closer to the center; (2) giving higher weight to longer high-BG episodes; (3) correcting the high BG index with an earlier HbA1c; and (4) incorporating other patient variables such as age, sex, and duration of disease.
Algorithm 1 includes an optimization function of the SMBG data that estimates the subsequent HbA1c, and recommends an optimal duration of the data collection period and an optimal frequency of self-monitoring during that period. It is important to note, however, that the broader purpose of Algorithm 1 is to assess the state of a patient's glycemic control. Although HbA1c is accepted as the "gold standard" for assessing glycemic control, it is currently unclear whether other measures, such as the average SMBG or the high BG index, are better predictors of the long-term complications of diabetes than HbA1c. Until that question is clarified, the purpose of Algorithm 1 is to estimate HbA1c. To be as close as possible to the future practical application of Algorithm 1, we proceeded as follows:
(1) first, a number of optimization functions using different independent variables, optimal durations and optimal SMBG frequencies were derived from the two training data sets 1 and 2, which we collected in previous studies of patients with T1DM;
(2) then, all coefficients were fixed, and Algorithm 1 was applied to a much larger test data set that included data from both T1DM and T2DM subjects, collected under very different conditions in a clinical trial conducted by Amylin Pharmaceuticals;
(3) the accuracy of Algorithm 1 with its various optimization functions was evaluated in detail using only the test data set.
The separation of training and test data sets allows us to state that the estimation accuracy of Algorithm 1 generalizes to other T1DM or T2DM data. Also, because the Amylin data (the test data set) were collected from subjects receiving active treatment to reduce their HbA1c, whose HbA1c therefore showed unusually large variation over the 6-month observation period, we can state that Algorithm 1 predicts not only a relatively constant HbA1c but also a large and unusually rapidly changing HbA1c. Along the same lines, Algorithm 1 is most useful for patients who want to optimize their HbA1c, and it can be speculated that this patient population is the one most likely to be interested in meters with advanced features, such as continuous estimation of HbA1c.
Summary of results
● The optimal SMBG data collection period is 45 days;
● The optimal SMBG frequency is 3 readings per day;
● Two optimal HbA1c estimation functions are proposed: F1, which uses SMBG data only, and F2, which uses SMBG data plus an HbA1c reading taken approximately 6 months before the predicted HbA1c;
● The accuracy of the HbA1c prediction in the test data set (N=573 subjects) was evaluated by multiple criteria, as detailed on the following pages (Table 2). We note here that the overall accuracy of F1 in T1DM (estimates within 20% of the HbA1c measurement) was 96.5%, while the overall accuracy of F2 was 95.7%. For T2DM, the overall accuracy of F1 was 95.9% and that of F2 was 98.4%. Thus, the accuracy of both F1 and F2 is comparable to direct HbA1c measurement;
● Most importantly, for patients whose HbA1c changed by 2 or more units from the baseline reading (N=68), F1 predicted the change with 100% accuracy in both T1DM and T2DM, while F2 had 71% and 85% accuracy in T1DM and T2DM, respectively;
● Both F1 and F2 estimate the 6-month HbA1c much more accurately than does the initial HbA1c at month 0. However, average BG alone is not an accurate estimator of HbA1c;
● A number of alternative methods were tested, such as selecting particular times of day (post-prandial readings) for estimating HbA1c, weighting SMBG readings differently according to the time between each SMBG reading and the HbA1c assay, and evaluating subjects separately by their mean blood glucose/HbA1c ratio. Although some of these alternatives achieved better results than the two functions presented above, none was better overall. We conclude that the optimization functions F1 and F2 will be used in future applications of Algorithm 1.
Detailed results-test data set
The most important part of the evaluation of Algorithm 1 is assessing its performance on data unrelated to the data used to develop and optimize it. The test data set, with data from 573 subjects (N=254 for T1DM and N=319 for T2DM), is sufficient for evaluating Algorithm 1.
Optimization of Algorithm 1. For each subject, a 45-day subset of his/her SMBG readings was selected. The start date of this subset was approximately 75 days before the subject's 6-month HbA1c assay, and the end date approximately 30 days before the assay. Because in this data set the time of the HbA1c assay is known only approximately, the interval between the last SMBG readings and the HbA1c assay is not exact. The time period was selected by continuously optimizing its duration and its end point (the time before the HbA1c assay). The optimal duration is 45 days. The optimal end time is 1 month before the HbA1c assay. In other words, 45 days of SMBG enable prediction of the HbA1c value approximately 1 month ahead. However, predicting any HbA1c value between 45 and 75 days ahead was nearly equally good: the differences were only numerical, without clinical significance. Similarly, the difference between a 45-day and a 60-day monitoring period is small. A monitoring period shorter than 45 days, however, causes rapid degradation of predictive ability.
The best estimation functions are linear and are given by:
Estimate 1 (previous HbA1c not known):
F1=0.809098*BGMM1+0.064540*LBGI1-0.151673*RHI1+1.873325
Estimate 2 (previous HbA1c, from approximately 6 months earlier, known):
F2=0.682742*HBA0+0.054377*RHI1+1.553277
In these formulas, BGMM1 is the mean blood glucose computed from 45 days of SMBG readings; LBGI1 and RHI1 are the low and high BG indices computed from the same readings; and HBA0 is the baseline HbA1c reading, used only in Estimate 2. The coefficient values were optimized on the training data set; the relevant statistics and plots are given in the section Detailed results-training data set.
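A minimal sketch of the two linear estimators as quoted above; the function names are illustrative, and units follow the text (BGMM1 in mmol/L, HbA1c in %):

```python
# Hypothetical transcription of the two estimation functions F1 and F2.
# BGMM1: 45-day mean SMBG (mmol/L); LBGI1/RHI1: low/high BG indices
# from the same readings; HBA0: baseline HbA1c (used only by F2).

def estimate_f1(bgmm1: float, lbgi1: float, rhi1: float) -> float:
    """Estimate 1: previous HbA1c not known (SMBG data only)."""
    return 0.809098 * bgmm1 + 0.064540 * lbgi1 - 0.151673 * rhi1 + 1.873325

def estimate_f2(hba0: float, rhi1: float) -> float:
    """Estimate 2: previous HbA1c (~6 months earlier) known."""
    return 0.682742 * hba0 + 0.054377 * rhi1 + 1.553277
```

Each call returns a point estimate of HbA1c; interval estimates would come from the training-set regression errors, as discussed below.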
Functions F1 and F2 produce point estimates of HbA1c, i.e., each function yields a single HbA1c estimate. Interval estimates can be obtained using the regression error estimates provided in the section Detailed results-training data set. However, for the test data set these interval estimates are not true 90% or 95% confidence intervals of HbA1c, since they were derived from the training data set and only applied to the test data (see the statistical note in the next section).
Accuracy evaluation of Algorithm 1. Tables 2A and 2B present the results of evaluating the optimal Algorithm 1 using the test data sets of T1DM and T2DM subjects, respectively. Several criteria are used:
(1) absolute deviation of the estimate from the HbA1c measurement (AERR);
(2) absolute percent deviation of the estimate from the HbA1c measurement (PERR);
(3) percentage of estimates within 20% of the HbA1c measurement (HIT20);
(4) percentage of estimates within 10% of the HbA1c measurement (HIT10); and
(5) percentage of estimates outside a 25% interval around the HbA1c measurement (MISS25).
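The five criteria above can be sketched as a small scoring helper; the function name and dictionary layout are illustrative, not from the original study:

```python
# Sketch of the five accuracy criteria applied to paired
# estimated/measured HbA1c values (both in %).

def accuracy_metrics(estimated, measured):
    n = len(measured)
    pairs = list(zip(estimated, measured))
    aerr = sum(abs(e - m) for e, m in pairs) / n                      # AERR
    perr = 100 * sum(abs(e - m) / m for e, m in pairs) / n            # PERR (%)
    hit20 = 100 * sum(abs(e - m) <= 0.20 * m for e, m in pairs) / n   # HIT20 (%)
    hit10 = 100 * sum(abs(e - m) <= 0.10 * m for e, m in pairs) / n   # HIT10 (%)
    miss25 = 100 * sum(abs(e - m) > 0.25 * m for e, m in pairs) / n   # MISS25 (%)
    return {"AERR": aerr, "PERR": perr, "HIT20": hit20,
            "HIT10": hit10, "MISS25": miss25}
```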
Table 2A: Accuracy of Algorithm 1 in T1DM (N=254 subjects)
| | F1 | F2 | Average BG | Previous HbA1c | P value |
| AERR | 0.77 | 0.61 | 1.68 | 1.1 | <0.001 |
| PERR (%) | 8.3 | 7.1 | 19.4 | 12.8 | <0.001 |
| HIT20 (%) | 96.5 | 95.7 | 61.0 | 81.0 | <0.001 |
| HIT10 (%) | 65.4 | 75.5 | 29.9 | 48.2 | <0.001 |
| MISS25 (%) | 2.4 | 1.6 | 28.4 | 9.9 | |
Table 2B: Accuracy of Algorithm 1 in T2DM (N=319 subjects)
| | F1 | F2 | Average BG | Previous HbA1c | P value |
| AERR | 0.72 | 0.57 | 1.92 | 0.87 | <0.001 |
| PERR (%) | 7.6 | 6.4 | 20.9 | 11.7 | <0.001 |
| HIT20 (%) | 95.9 | 98.4 | 56.4 | 82.8 | <0.001 |
| HIT10 (%) | 70.2 | 79.3 | 29.5 | 53.3 | <0.001 |
| MISS25 (%) | 1.2 | 0.6 | 36.7 | 8.2 | <0.001 |
The first two columns of Tables 2A and 2B give the results for the optimization functions F1 and F2, respectively. The third column shows the accuracy when average BG (mmol/L) is used as the estimate of HbA1c. The fourth column shows the same accuracy measures computed when the month-0 HbA1c assay is used as the estimate of the 6-month HbA1c. It is evident that for both T1DM and T2DM, F2 is overall slightly better than F1 for estimating HbA1c. Most importantly, F1 and F2 estimate HbA1c much better than its earlier value or the average BG does. This is especially true for the percentage of estimates falling outside the 25% precision interval. The differences between the performance of F1 and F2 and that of the previous HbA1c assay are highly significant (column 4).
Statistical note. It is important to note that conventional regression-type criteria, such as the R2 or the F and p values obtained from an ANOVA table, are not appropriate for evaluating the accuracy of Algorithm 1. This is because the parameter estimates were derived from another, unrelated data set (the training data) and only applied to the test data set. Thus, the statistical assumptions of the underlying model are violated (for example, in the test data set the sum of the residuals is not zero), so the R2 and the F and p values lose their statistical meaning.
The accuracy of Algorithm 1 in the test data set was further evaluated by examining the T1DM and T2DM subjects whose baseline HbA1c reading changed substantially from the reading at the subsequent 6 months. Tables 3A and 3B list the T1DM and T2DM subjects whose HbA1c changed by 2 or more units. In each subject group, 34 subjects showed such a change in HbA1c. Algorithm 1, function F1, predicted this change 100% of the time in both T1DM and T2DM. Because its formula contains the baseline HbA1c (which partially pulls the estimate back toward the baseline value), the predictive power of F2 is smaller: 71% in T1DM and 85% in T2DM. For all but two subjects, the baseline HbA1c fell outside the 20% interval around the 6-month HbA1c:
Table 3A: T1DM subjects with a change in HbA1c of 2 or more units
ID HBA0 HBA6 DHBA F1 F2 HITF1 HITF2 HITHBA0
6504 12.0 7.0 5.00 6.82 9.90 100.00 .00 .00
6613 10.5 6.8 3.70 8.02 9.37 100.00 .00 .00
4003 12.4 8.9 3.50 8.45 10.73 100.00 .00 .00
6204 11.0 7.5 3.50 7.29 9.45 100.00 .00 .00
3709 13.0 9.7 3.30 8.99 11.54 100.00 100.00 .00
4701 12.8 9.5 3.30 9.50 11.61 100.00 .00 .00
3614 11.9 8.7 3.20 8.24 10.30 100.00 100.00 .00
3602 11.5 8.3 3.20 7.93 9.94 100.00 100.00 .00
6008 11.3 8.3 3.00 9.30 10.53 100.00 .00 .00
3723 13.0 10.1 2.90 8.80 11.46 100.00 100.00 .00
7010 12.7 9.8 2.90 8.09 10.89 100.00 100.00 .00
6208 11.5 8.7 2.80 8.42 10.09 100.00 100.00 .00
6202 10.6 7.8 2.80 7.91 9.37 100.00 .00 .00
3924 9.9 7.2 2.70 7.71 8.72 100.00 .00 .00
8211 11.0 8.3 2.70 8.76 10.32 100.00 .00 .00
6012 9.3 6.7 2.60 7.82 8.35 100.00 .00 .00
3913 11.0 8.4 2.60 7.88 9.54 100.00 100.00 .00
6701 11.2 8.6 2.60 8.75 10.07 100.00 100.00 .00
2307 10.6 8.1 2.50 7.95 9.27 100.00 100.00 .00
3516 11.8 9.3 2.50 7.76 10.03 100.00 100.00 .00
5808 9.6 7.2 2.40 7.61 8.52 100.00 100.00 .00
2201 11.8 9.5 2.30 8.90 10.71 100.00 100.00 .00
4010 12.4 10.1 2.30 8.57 11.15 100.00 100.00 .00
6210 11.9 9.6 2.30 8.33 10.40 100.00 100.00 .00
4904 11.3 9.1 2.20 8.63 10.29 100.00 100.00 .00
6709 10.3 8.1 2.20 7.83 9.04 100.00 100.00 .00
6619 9.5 7.3 2.20 7.64 8.57 100.00 100.00 .00
3921 10.9 8.8 2.10 7.20 9.19 100.00 100.00 .00
6603 11.0 8.9 2.10 8.18 9.89 100.00 100.00 .00
7415 10.6 8.5 2.10 7.94 9.27 100.00 100.00 .00
6515 9.8 7.8 2.00 7.13 8.54 100.00 100.00 .00
3611 10.3 8.3 2.00 8.36 9.23 100.00 100.00 .00
3732 13.2 11.2 2.00 9.30 11.99 100.00 100.00 100.00
7409 10.0 8.0 2.00 7.99 9.04 100.00 100.00 .00
Table 3B: T2DM subjects with a change in HbA1c of 2 or more units
ID HBA0 HBA6 DHBA F1 F2 HITF1 HITF2 HITHBA0
6754 10.8 7.0 3.80 6.90 9.03 100.00 .00 .00
6361 11.3 7.6 3.70 8.51 10.20 100.00 .00 .00
6270 12.0 8.6 3.40 7.85 10.03 100.00 100.00 .00
6264 11.1 7.8 3.30 8.31 9.70 100.00 .00 .00
6355 11.8 8.6 3.20 7.99 9.90 100.00 100.00 .00
3961 10.8 8.0 2.80 9.13 9.73 100.00 .00 .00
6555 11.1 8.3 2.80 8.11 9.55 100.00 100.00 .00
8052 11.7 8.9 2.80 7.68 9.80 100.00 100.00 .00
5356 9.7 7.0 2.70 6.75 8.20 100.00 100.00 .00
3966 10.3 7.7 2.60 8.08 9.07 100.00 100.00 .00
908 9.5 6.9 2.60 7.47 8.23 100.00 100.00 .00
6554 10.7 8.1 2.60 8.16 9.42 100.00 100.00 .00
2353 11.1 8.7 2.40 8.99 9.90 100.00 100.00 .00
4064 11.3 8.9 2.40 7.89 9.88 100.00 100.00 .00
6351 10.1 7.7 2.40 7.92 8.63 100.00 100.00 .00
7551 12.2 9.8 2.40 9.17 11.02 100.00 100.00 .00
6358 8.4 6.1 2.30 7.00 7.32 100.00 .00 .00
3965 10.1 7.8 2.30 7.83 8.64 100.00 100.00 .00
914 11.1 8.8 2.30 9.57 10.33 100.00 100.00 .00
1603 10.2 7.9 2.30 8.02 8.88 100.00 100.00 .00
1708 10.8 8.6 2.20 7.62 9.24 100.00 100.00 .00
3761 12.4 10.2 2.20 9.13 10.86 100.00 100.00 .00
3768 11.2 9.0 2.20 8.29 9.74 100.00 100.00 .00
326 10.3 8.2 2.10 7.45 8.78 100.00 100.00 .00
109 9.3 7.2 2.10 7.70 8.18 100.00 100.00 .00
1501 11.9 9.8 2.10 8.52 10.18 100.00 100.00 .00
3964 13.7 11.6 2.10 10.08 12.65 100.00 100.00 100.00
4352 12.2 10.1 2.10 9.51 11.14 100.00 100.00 .00
7858 12.1 10.0 2.10 9.53 11.01 100.00 100.00 .00
4256 10.6 8.6 2.00 8.76 9.69 100.00 100.00 .00
4752 10.1 8.1 2.00 8.51 8.87 100.00 100.00 .00
6556 11.1 9.1 2.00 8.72 9.68 100.00 100.00 .00
6562 7.9 5.9 2.00 7.07 7.04 100.00 100.00 .00
8255 10.9 8.9 2.00 8.90 9.87 100.00 100.00 .00
In tables 3A and 3B:
ID - the subject's ID number;
HBA0 - baseline HbA1c;
HBA6 - HbA1c measurement at 6 months;
DHBA - absolute difference between the baseline and 6-month HbA1c values;
F1 - HbA1c estimated by function F1 (SMBG data only);
F2 - HbA1c estimated by function F2 (using the previous HbA1c assay result);
HITF1 = 100 if F1 is within 20% of the 6-month HbA1c value, otherwise 0;
HITF2 = 100 if F2 is within 20% of the 6-month HbA1c value, otherwise 0; and
HITHBA0 = 100 if the baseline HbA1c is within 20% of the 6-month HbA1c reading, otherwise 0.
Detailed results-training data set
This section describes the steps of optimizing Algorithm 1. The optimization consists of two parts: (1) prediction of HbA1c assuming that no previous HbA1c reading is available, and (2) prediction of HbA1c assuming that the previous HbA1c can be used.
We considered a number of different functions for describing the relationship between SMBG data and HbA1c. In terms of accuracy and computational simplicity, if the previous HbA1c is not used, the best function appears to be a linear function of the mean, low BG index, and high BG index of the SMBG readings; otherwise, it is a linear function of the previous HbA1c and the high BG index. Nonlinear relationships did not improve the goodness of fit of the model and were therefore not considered for practical applications.
Training data set 1 - no previous HbA1c. The coefficients of function F1 were optimized using a linear regression model. The optimal coefficients were given in the previous section. Here we give the goodness-of-fit statistics of the model:
Multiple R: 0.71461
R square: 0.51067

Analysis of variance:
             DF   Sum of squares   Mean square
Regression    3       154.57097      51.52366
Residual     90       148.10903       1.64566
F = 31.30889, significance of F = 0.0000
Analysis of the residuals of this model shows that the residuals are close to normally distributed (see Fig. 11). The SD of the residuals is 1.2 (by definition, their mean is 0). We can therefore accept that the model describes the data well.
Training data set 2 - with previous HbA1c. Again, the coefficients of function F2 were optimized using a linear regression model. The optimal coefficients were given in the previous section. Here we give the goodness-of-fit statistics of the model:
Multiple R: 0.86907
R square: 0.75528

Analysis of variance:
             DF   Sum of squares   Mean square
Regression    4        38.70237       9.67559
Residual     54        12.54000       0.23222
F = 41.66522, significance of F = 0.0000
Analysis of the residuals of this model shows that the residuals are close to normally distributed (see Fig. 12). The SD of the residuals is 0.47 (by definition, their mean is 0). We can therefore accept that the model describes the data well.
In addition, comparing the models with and without the previous HbA1c, we can conclude that if the previous HbA1c is included, the final model is much better in terms of both R2 and residual error.
However, as we saw in the previous section, in the unrelated data set the previous HbA1c does not contribute to the overall accuracy of the prediction, and in cases where HbA1c changes substantially and rapidly it even hampers the ability of the algorithm. We can therefore conclude that even though including the previous HbA1c may be better from a statistical standpoint, it may not have sufficient practical utility to justify a future meter-reading input. We also do not know the exact time interval between the HbA1c assay and the SMBG profile for which the HbA1c input would still be useful. Perhaps this depends on the change in HbA1c over that period: as we saw in the previous section, a 2-unit change in HbA1c makes the earlier HbA1c reading completely useless.
SMBG/HbA1c ratio
We now present an alternative approach that improves the statistical accuracy of the model fit while maintaining considerable clinical applicability. It turns out that the ratio of the 45-day SMBG average to HbA1c is a measure with a nearly perfect normal distribution (as confirmed by Kolmogorov-Smirnov tests) and, most importantly, distinguishes three groups of subjects: ratio < 1.0, ratio 1.0-1.2, and ratio > 1.2. The first two groups each account for about 40% of the subjects, and the third for about 20%. This holds for both T1DM and T2DM and was observed in the training data set as well as the test data set. Furthermore, this ratio appears to be particularly stable over time, and thus may be a measure reflecting the patient's SMBG habits (for example, if SMBG is performed mostly when BG is low, the resulting average will underestimate HbA1c, and the corresponding ratio will be below 1.0). Note that this is only a hypothesis that cannot be confirmed with the available data; however, some of our analyses seem to confirm that each person's ratio at a given point in time is known to some extent. This might appear equivalent to knowing the previous HbA1c, and possibly to using it as a data input, but the application of this ratio is very different from that of the previous HbA1c. Rather than being used directly in the prediction formula, the ratio is used to classify patients, and one of three different prediction formulas is then applied. These new formulas do not directly include HbA1c and therefore do not suffer from the inertia that including HbA1c introduces. In addition, the mean HbA1c differences among the three groups defined by this ratio are small and unrelated to the ratio, so the reason the ratio differs among people must be unrelated to HbA1c.
If we first divide the subjects into 3 groups according to their ratio and run the regressions separately in the training data set, the fit of the regression models increases significantly: (1) in group 1 (ratio < 1.0) we obtain multiple R = 0.86, R2 = 0.73; (2) in group 2 (ratio 1.0-1.2) the fit is best, with R = 0.97, R2 = 0.94; and (3) in group 3 (ratio > 1.2) the fit is worst, with R = 0.69, R2 = 0.47. Since none of the three regression models includes the previous HbA1c, we conclude that the goodness of fit increases significantly for about 80% of the subjects, remains the same for the remaining 20%, and the subjects whose fit will deteriorate can be identified in advance.
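The three-way split by SMBG/HbA1c ratio described above can be sketched as a small classifier; the function name and integer group labels are illustrative assumptions:

```python
# Hypothetical sketch: classify a subject by the ratio of the 45-day
# mean SMBG (mmol/L) to HbA1c (%), with cut-offs at 1.0 and 1.2 as in
# the text. A separate regression formula would then be applied per group.

def ratio_group(mean_smbg_mmol: float, hba1c: float) -> int:
    ratio = mean_smbg_mmol / hba1c
    if ratio < 1.0:
        return 1  # ~40% of subjects; R2 ~ 0.73 in the training set
    elif ratio <= 1.2:
        return 2  # ~40% of subjects; best fit, R2 ~ 0.94
    return 3      # ~20% of subjects; worst fit, R2 ~ 0.47
```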
The test data set was then divided into 3 groups according to the subjects' ratios. The prediction accuracy obtained was similar to that obtained previously (Tables 4A and 4B):
Table 4A: Accuracy of Algorithm 1 in T1DM (N=254 subjects)
| | Ratio < 1.0 | Ratio 1.0-1.2 | Ratio > 1.2 |
| AERR | 0.70 | 0.63 | 0.74 |
| PERR (%) | 7.8 | 7.4 | 7.9 |
| HIT20 (%) | 93.8 | 93.0 | 95.5 |
| HIT10 (%) | 68.8 | 73.4 | 72.7 |
| MISS25 (%) | 3.1 | 2.6 | 0.0 |
Table 4B: Accuracy of Algorithm 1 in T2DM (N=319 subjects)
| | Ratio < 1.0 | Ratio 1.0-1.2 | Ratio > 1.2 |
| AERR | 0.63 | 0.68 | 0.89 |
| PERR (%) | 7.6 | 7.8 | 8.8 |
| HIT20 (%) | 97.4 | 95.0 | 95.3 |
| HIT10 (%) | 67.2 | 65.3 | 57.7 |
| MISS25 (%) | 0.0 | 1.7 | 0.0 |
In short, knowing each subject's SMBG/HbA1c ratio and estimating separately within the corresponding groups appears to improve the statistical performance of the model without sacrificing clinical accuracy.
Other hypotheses and ideas tested
We tested a number of other hypotheses and ideas that should prove useful, at least in facilitating and focusing the analysis of the data collected in Example No. 2. The brief results are as follows:
(1) HbA1c correlates most strongly with SMBG readings taken in the afternoon, 12 pm to 6 pm, and least with fasting SMBG readings (4 a.m.-8 a.m.). However, collecting only post-prandial SMBG readings would not improve the prediction of HbA1c; on the contrary, the prediction becomes worse if the relatively small but significant contributions of all other hours of the day are ignored. If readings from other hours of the day are weighted differently, the prediction of HbA1c could be improved, but the improvement is not sufficient to offset the additional complexity of the model;
(2) in T2DM, the relationship between HbA1c and average SMBG is stronger than in T1DM, even when the HbA1c values of the two groups are matched. By direct correlation, the coefficient was about 0.6 in T1DM and about 0.75 in T2DM throughout the study;
(3) weighting the SMBG readings differently according to the time between the SMBG reading and the HbA1c assay (e.g., giving higher weight to readings closer to the assay) does not yield a better prediction of HbA1c;
(4) including demographic variables, such as age, duration of diabetes, gender, etc., did not improve the prediction of HbA1c;
(5) the simplest possible linear relationship between HbA1c and average SMBG (measured in mmol/L) is given by: HbA1c = 0.41046*BGMM + 4.0775. Although statistically inferior to F1 and F2, the HbA1c estimate given by this formula still has an overall accuracy of about 95% in both T1DM and T2DM (based on a deviation from the HbA1c assay of less than 20%), and may be useful in a meter that does not compute the low and high BG indices (however, without the low BG index a prediction of hypoglycemia is not possible, so the formula may only be useful for meters that include Algorithm 1 but not Algorithms 2 and 3).
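The simple linear estimator quoted in item (5) can be written directly; the function name is illustrative:

```python
# The simplest linear relationship quoted in the text: HbA1c (%)
# estimated from average SMBG in mmol/L, with no BG-index terms.

def hba1c_from_mean_bg(bgmm: float) -> float:
    return 0.41046 * bgmm + 4.0775
```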
Algorithm 2 - evaluation of the long-term risk of SH
Example No. 1 provides, but is not limited to, an extension of Algorithm 2 that includes estimating the probability of biochemically significant hypoglycemia (BSH, defined as a BG reading <= 39 mg/dl) or biochemically moderate hypoglycemia (BMH, defined as 39 mg/dl < BG <= 55 mg/dl) in an individual. In addition, we planned to evaluate whether Algorithm 2 predicts the incidence of nocturnal SH (midnight to 7:00 am) better than daytime SH.
Algorithm 2 is a classification algorithm: based on a subject's SMBG data, it classifies the subject's risk of future BSH or BMH into a particular risk range. To stay as close as possible to the real future application of Algorithm 2, we proceeded as follows:
(1) first, the optimal classification variables, optimal classification ranges, optimal duration, and optimal SMBG frequency were derived from training data set 1;
(2) the test data set was then divided into two parts: the first 45 days and the remaining data. The optimal parameters of Algorithm 2 were applied to the first 45-day portion of the data to predict the BSH and BMH in the second portion via probability estimates of future BSH or BMH;
(3) the accuracy of Algorithm 2 was evaluated in detail using only the test data.
The separation of training and test data sets allows us to state that the estimation accuracy of Algorithm 2 generalizes to other T1DM or T2DM patient data. Moreover, because the Amylin data were collected from subjects undergoing intensified therapy, we can state that Algorithm 2 was tested and proved effective in subjects with a changing and increasing risk of hypoglycemia.
Summary of results
● The optimal SMBG data collection period required to estimate the probability of future BSH or BMH is 40-45 days. The optimal SMBG frequency is 3-4 readings/day. A larger number of readings does not significantly increase the predictive power of Algorithm 2; fewer than 3 readings per day reduces it. Note that this requirement refers to the average number of readings per day over the 45-day observation period and does not mean that 3-4 readings must be taken every day;
● The relationship between the predictor variables and future SH and MH is strictly nonlinear. Thus, a linear method cannot be used to optimize the prediction, although a direct linear model achieves an R2 of about 50% (by contrast, the best result of the DCCT was predicting 8% of future SH);
● predicting nighttime SH alone is generally weaker than predicting daytime SH;
● 15 risk ranges were defined for future BSH and BMH. The best separation of ranges was obtained from the low BG index alone, although combinations of the low BG index with other variables work similarly well;
● Although the frequencies of BSH and BMH differ between T1DM and T2DM (see Table 5), the conditional frequencies given a risk range do not differ between T1DM and T2DM. This allows a unified approach to predicting the risk of SH and MH;
● future empirical probabilities were calculated and compared for 15 risk ranges. All comparisons have high significance, with p's < 0.0005.
● these empirical probabilities are approximated by a two-parameter Weibull distribution (two-parameter Weibull distribution) that yields the theoretical probabilities of future BSH and BMH in each risk range.
● The fit of these approximations is very good: all have coefficients of determination above 85%, and some reach 98% (see Figs. 1-5 and 9-10).
Detailed results-test data set
Determining the personal risk range of SH/MH. Data from a total of 600 subjects were used for this analysis. A low BG index (LBGI) was computed for each subject from his/her first 45 days of SMBG data collection. The LBGI was then classified into one of the 15 optimal risk ranges (the variable RCAT, range 0-14) found in training data set 1. These risk ranges are defined by the following inequalities:
if(LBGI≤0.25),RCAT=0
if(0.25<LBGI≤0.5),RCAT=1
if(0.50<LBGI≤0.75),RCAT=2
if(0.75<LBGI≤1.00),RCAT=3
if(1.00<LBGI≤1.25),RCAT=4
if(1.25<LBGI≤1.50),RCAT=5
if(1.50<LBGI≤1.75),RCAT=6
if(1.75<LBGI≤2.00),RCAT=7
if(2.00<LBGI≤2.50),RCAT=8
if(3.00<LBGI≤3.50),RCAT=9
if(3.50<LBGI≤4.00),RCAT=10
if(4.00<LBGI≤4.50),RCAT=11
if(4.50<LBGI≤5.25),RCAT=12
if(5.25<LBGI≤6.50),RCAT=13
if(LBGI>6.50),RCAT=14
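The inequalities above can be transcribed as a lookup function. Note that the listed ranges leave the interval 2.50 < LBGI <= 3.00 unassigned; this sketch (with an illustrative function name) preserves that gap rather than guessing:

```python
# Sketch: map a subject's low BG index (LBGI) to a risk category
# RCAT in 0-14, following the listed inequalities.

def risk_category(lbgi: float):
    low_bounds = [0.25, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 2.00]
    for rcat, upper in enumerate(low_bounds):
        if lbgi <= upper:
            return rcat
    if lbgi <= 2.50:
        return 8
    if lbgi <= 3.00:
        return None  # 2.50-3.00 is not covered by the listed inequalities
    for upper, rcat in [(3.50, 9), (4.00, 10), (4.50, 11), (5.25, 12), (6.50, 13)]:
        if lbgi <= upper:
            return rcat
    return 14
```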
Observed frequencies of BSH and BMH. For each subject, the occurrences of BSH and BMH indicated by SMBG were counted over the 1, 3, and 6 months following the first 45 days of data collection. Table 5A gives the frequencies of 0, >=1, >=2, and >=3 BSH and BMH episodes observed in T1DM; Table 5B gives the same data observed in T2DM:
Table 5A: BSH and BMH frequencies observed in T1DM
Table 5B: BSH and BMH frequencies observed in T2DM
Nocturnal BSH and BMH account for approximately 15% of all events indicated by SMBG. The correlation between nocturnal events and all predictor variables was weak in the training data set. We conclude that targeted prediction of nocturnal events is not feasible.
Empirical probabilities of future BSH and BMH. We computed specific empirical probabilities of future BSH and BMH for each of the 15 risk ranges. These probabilities include: (1) the probability of at least one BSH or BMH episode within the following 1, 3, and 6 months; (2) the probability of at least two BSH or BMH episodes within the following 3 and 6 months; and (3) the probability of at least three BSH or BMH episodes within the following 6 months. Any other combination of probabilities can, of course, be computed as desired.
The most important conclusion from this analysis is that, given a risk range, the probabilities of future BSH and BMH do not differ significantly between T1DM and T2DM. This allows a unified approach to both the empirical and the theoretical estimation of these probabilities in T1DM and T2DM. Consequently, the data of the T1DM and T2DM patients were pooled for the analyses below.
FIGS. 1-5 and 9-10 show scatter plots of 6 calculated empirical probabilities plotted against 15 risk ranges. The empirical probability of BSH is represented by black triangles and the empirical probability of BMH is represented by red squares.
The empirical probabilities of all groups were compared across the 15 risk ranges using univariate ANOVA, and all p-levels were below 0.0005. Thus, the differences in BSH and BMH events across the risk ranges are highly significant.
Theoretical probabilities of future BSH and BMH. To allow the probability of future BSH and BMH to be estimated by a direct formula, we approximated the empirical probabilities with a two-parameter Weibull probability distribution. The Weibull distribution function is given by:
F(x)=1-exp(-a*x**b), x>0; otherwise F(x)=0
Statistical note. The parameters a and b are greater than 0 and are referred to as the scale and shape parameters, respectively. In the special case b = 1, the Weibull distribution becomes exponential. This distribution is frequently used in engineering problems because randomly occurring technical failures are often not completely independent of one another (if the failures were completely independent, they would form a Poisson process, described by an exponential distribution, i.e., b = 1). The situation here is analogous: we need to describe the distribution of events (failures) that are not completely independent and tend to cluster, as demonstrated by our previous studies.
Each set of empirical probabilities was approximated using the theoretical formula given above. The parameters were estimated by nonlinear least squares (with initial parameter estimates given by a linear log-log model). The goodness of fit of each model was evaluated by a coefficient of determination (D2). Its meaning is similar to that of R2 in linear regression, but R2 itself cannot be applied to nonlinear models.
The model fits are shown in Figures 1-6, with black lines for the probability of BSH and dashed lines for the probability of BMH. In each figure we give the parameter estimates of the corresponding model, thus providing a direct formula for computing the frequencies of 0, >=1, >=2, and >=3 BSH and BMH episodes occurring within 1, 3, and 6 months after the initial SMBG. Some of these formulas, or variants thereof, can be included in a monitoring device or software as indicators of the risk of SH and MH.
The D2 value given below each figure serves as an indicator of the accuracy of the approximation. All values are above 85% and some reach 98%, confirming that the approximation is very good and that theoretical rather than empirical probabilities can be used in future studies and applications.
The theoretical probability of one or more moderate or severe hypoglycemic events occurring is given by the formula shown in fig. 1:
P(MH>=1)=1-exp(-exp(-1.5839)*Risk**1.0483)
P(SH>=1)=1-exp(-exp(-4.1947)*Risk**1.7472)
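These two formulas can be evaluated directly; a sketch using the Figure 1 coefficients quoted above, with Risk being the risk category (0-14) and illustrative function names:

```python
import math

# Sketch of the Weibull-based theoretical probabilities of >=1 moderate
# (MH) or severe (SH) hypoglycemic episode within 1 month, using the
# coefficients quoted for Figure 1.

def p_mh_ge1(risk: float) -> float:
    return 1 - math.exp(-math.exp(-1.5839) * risk ** 1.0483)

def p_sh_ge1(risk: float) -> float:
    return 1 - math.exp(-math.exp(-4.1947) * risk ** 1.7472)
```

As expected, both probabilities are 0 at risk 0 and increase monotonically with the risk category, with the MH curve lying above the SH curve.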
Figure 1 gives the empirical and theoretical probabilities of moderate (dashed line) and severe (black line) hypoglycemia within 1 month after the SMBG estimation, within each of the 15 risk levels defined by the low BG index. Since the models are nonlinear, their goodness of fit is estimated by the coefficient of determination D2, the analog of R2 in a linear model. The coefficients of determination and their square roots are as follows:
SH model D2=96%,D=98%
MH model D2=87%,D=93%
The theoretical probability of one or more moderate or severe hypoglycemic events occurring is given by the formula shown in fig. 2:
P(MH>=1)=1-exp(-exp(-1.3731)*Risk**1.1351)
P(SH>=1)=1-exp(-exp(-3.2802)*Risk**1.5050)
figure 2 gives the empirical and theoretical probability of moderate (dashed line) and severe (black line) hypoglycemia within 3 months after SMBG estimation within each of the 15 risk levels defined by the low BG index.
The coefficients and their square roots are determined as follows:
SH model D2=93%,D=97%
MH model D2=87%,D=93%
The theoretical probability of one or more moderate or severe hypoglycemic events occurring is given by the formula shown in fig. 3:
P(MH>=1)=1-exp(-exp(-1.3721)*Risk**1.3511)
P(SH>=1)=1-exp(-exp(-3.0591)*Risk**1.4549)
figure 3 gives the empirical and theoretical probability of moderate (dashed line) and severe (black line) hypoglycemia within 6 months after SMBG estimation within each of the 15 risk levels defined by the low BG index.
The coefficients and their square roots are determined as follows:
SH model D2=86%,D=93%
MH model D2=89%,D=95%
The theoretical probability of two or more moderate or severe hypoglycemic events occurring is given by the formula shown in fig. 4:
P(MH>=2)=1-exp(-exp(-1.6209)*Risk**1.0515)
P(SH>=2)=1-exp(-exp(-4.6862)*Risk**1.8580)
figure 4 gives the empirical and theoretical probability of developing moderate (dashed line) and severe (black line) hypoglycemia 2 or more times within 3 months after SMBG estimation within each of the 15 risk levels defined by the low BG index.
The coefficients and their square roots are determined as follows:
SH model D2=98%,D=99%
MH model D2=90%,D=95%
The theoretical probability of two or more moderate or severe hypoglycemic events occurring is given by the formula shown in fig. 5:
P(MH>=2)=1-exp(-exp(-1.7081)*Risk**1.19555)
P(SH>=2)=1-exp(-exp(-4.5241)*Risk**1.9402)
Figure 5 gives the empirical and theoretical probabilities of 2 or more moderate (dashed line) and severe (black line) hypoglycemic events within 6 months after SMBG estimation, within each of the 15 risk levels defined by the low BG index.
The coefficients of determination (D2) and their square roots (D) are as follows:
SH model: D2 = 98%, D = 99%
MH model: D2 = 89%, D = 95%
The theoretical probability of three or more moderate or severe hypoglycemic events occurring within 6 months is given by the formulas shown in Fig. 9:
P(MH>=3)=1-exp(-exp(-2.0222)*Risk**1.2091)
P(SH>=3)=1-exp(-exp(-5.5777)*Risk**2.2467)
Figure 10 gives the empirical and theoretical probabilities of 3 or more moderate (dashed line) and severe (black line) hypoglycemic events within 6 months after SMBG estimation, within each of the 15 risk levels defined by the low BG index.
The coefficients of determination (D2) and their square roots (D) are as follows:
SH model: D2 = 97%, D = 99%
MH model: D2 = 90%, D = 95%
Detailed results-training data set
The training data set included SMBG data and monthly records of severe hypoglycemia. In contrast to the test data set, where cutoff BG values were used to determine BSH and BMH, the monthly records included reports of severe symptoms, defined as unconsciousness, coma, inability to self-treat, or significant cognitive impairment due to hypoglycemia. Within 6 months after SMBG, study subjects reported an average of 2.24 such events per person, with 67% of subjects reporting no such events. From a statistical point of view, this highly skews the distribution of SH events and makes a direct linear approach unsuitable. Although linear regression can be used to estimate the relative contribution of various variables to the prediction of SH, it cannot be used to construct the final model. We performed the following analyses:
(1) Not knowing the SH history: Neglecting any historical knowledge of SH, we predicted future SH by regression from the baseline HbA1c and SMBG features such as average BG, low BG index, and estimated rate of change of BG risk (all variables are described in the initial disclosure of the invention). As previously found, HbA1c and the average BG did not contribute to the prediction of SH. The final regression model included the low BG index and the rate of change of BG risk, and had the following goodness of fit:
Multiple R = .61548
R Square = .37882
Analysis of variance: F = 27.74772, Significance of F = .0000
| Variable | B | SE B | β | T | Sig T |
| LBGI | 4.173259 | .649189 | 2.104085 | 6.428 | .0000 |
| Ratio | -5.749637 | 1.091007 | -1.724931 | -5.270 | .0000 |
| (Constant) | -2.032859 | .790491 |  | -2.572 | .0117 |
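The fitted model can be applied directly with its unstandardized B coefficients. A hedged sketch (the function name is ours, and we call the rate-of-change predictor `ratio` after the second row of the table):

```python
def predicted_sh(lbgi, ratio):
    """Linear predictor of future SH using the B coefficients in the
    table above: the low BG index (LBGI), the rate-of-change feature
    ('Ratio' row), and the intercept ('Constant' row)."""
    return 4.173259 * lbgi - 5.749637 * ratio - 2.032859
```

As the discussion that follows makes clear, such a linear score is useful for gauging the relative contribution of predictors, not as the final model.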
(2) Knowing the previous SH: When we included the number of SH events in the previous year, as reported in the screening questionnaire, this variable accounted for an additional 11% of the variance of future SH:
Multiple R = .70328
R Square = .49461
Analysis of variance: F = 29.35999, Significance of F = .0000
| Variable | B | SE B | β | T | Sig T |
| LBGI | .337323 | .704286 | .375299 | 4.541 | .0000 |
| LDR | -4.350779 | 1.036380 | -1.305264 | -4.198 | .0001 |
| RLO | 3.134519 | .631684 | 1.580371 | 4.962 | .0000 |
| (Constant) | -2.136619 | .717334 |  | -2.979 | .0037 |
(3) Without knowing the number of previous SH episodes, but knowing whether or not someone had experienced SH before, we could account for 45% of the variance of future SH using the SMBG variables only;
(4) Finally, two separate linear models accounted for 55% of the variance of daytime SH and 25% of the variance of nighttime SH. The direct correlation of all predictor variables with nighttime SH was weak. Nighttime events accounted for 30% of total SH.
We conclude that linear predictive models can directly account for approximately 40-50% of the variance of future SH. However, such a model is not well balanced in terms of its residuals (due to the highly skewed distribution of the number of SH events in the diabetic population). Statistical evidence is given by the normal probability plot of Fig. 13, which shows significant deviation of the standardized residuals from their predicted values.
Therefore, we adopted another approach to predicting SH: using their SMBG data, we classified subjects into risk ranges and estimated the probability of subsequent SH within each range. We tried various classification schemes, maximizing the separation between risk ranges while seeking the finest risk-estimation resolution (the largest number of ranges).
The best results were obtained by classifying on the low BG index alone, yielding the 15 risk ranges given at the beginning of the previous section.
In addition to the optimal separation between ranges, this result has other advantages: (1) no prior knowledge of SH history is required; (2) the calculations are relatively simple and do not require tracking of temporal variables such as the rate of change of BG; and (3) the classification appears to be equally applicable to T1DM and T2DM patients (without any prior knowledge of SH).
Algorithm 3: evaluation of short-term risk of hypoglycemia
Example No. 1 presents, but is not limited to, the optimization of Algorithm 3 with respect to:
(1) use of the baseline long-term risk (from Algorithm 2) and HbA1c (from Algorithm 1);
(2) risk criteria/thresholds for hypoglycemia alarms;
(3) SMBG frequency;
(4) whether a hypoglycemia alarm should be issued if an increased risk of hypoglycemia is detected and there has been a period without SMBG; and
(5) contributions of demographic variables, such as severe hypoglycemia history.
Introduction
Unlike Algorithm 1 and Algorithm 2, which have a long development history, Algorithm 3 deals with a problem that until recently was considered intractable. Indeed, it is still generally accepted that it is impossible to predict future BG values (especially hypoglycemia) from previously known values (Bremer T and Gough DA. Is blood glucose predictable from previous values? Diabetes, 48:445-451, 1999). Our previous work, reported in a manuscript available from Lifescan, Inc. and set forth in detail in the present disclosure, questioned this general opinion. To explain the basis for this challenge and to clarify the principles underlying Algorithm 3, we include the following paragraphs.
Our quantitative "philosophy" of the characteristics of diabetes:
In the endocrine system under study, hormonal interactions are controlled by a dynamically regulated biochemical network, a structure of major nodes and connections of varying complexity. Diabetes disturbs the network regulating insulin-glucose kinetics at various levels. For example, in T1DM the natural production of insulin is eliminated entirely, whereas in T2DM the use of insulin inside the cells is hindered by increased insulin resistance. In T1DM (and often in T2DM), some form of external insulin replacement is required, which makes the regulatory system vulnerable to imperfect external factors, including the timing and dosage of insulin boluses and injections, food consumed, physical activity, and so on. This often leads to extreme BG deviations: hypoglycemia and hyperglycemia. In many, but not all, cases hypoglycemia triggers an endocrine response known as counterregulation. In mathematical terms, then, the fluctuation of BG over time is the measurable result of the activity of a complex dynamical system influenced by many internal and external factors. According to well-known dynamical-systems theory, a purely deterministic system evolves to exhibit random macroscopic behavior as the complexity of its regulation increases. Thus, BG fluctuations observed at the human level will be nearly deterministic over short periods (minutes), whereas over the long term the fluctuations are nearly random and include extreme transitions such as SH events. Consequently, stochastic modeling and statistical inference are best suited to analyzing the system over longer periods: this is the paradigm adopted by Algorithm 1 and Algorithm 2, which predict ranges of values and probabilities of events after a particular observation period using measures we originally proposed, such as the LBGI and HBGI.
The fluctuations of BG in the short term can be modeled and predicted with a deterministic network, which will be implemented in future intelligent insulin delivery devices that are capable of continuous detection.
Algorithm 3 works in an intermediate time range of hours to days and therefore requires a combination of statistical inference and deterministic modeling. The former is used to estimate an individual's baseline risk of SH, while the latter dynamically tracks individual parameters and makes predictions before an SH event occurs. When implemented in a device, Algorithm 3 works as follows:
(1) the device collects some baseline information about the subject and establishes personal baseline parameters;
(2) the device then begins to track a certain set of features of the SMBG data;
(3) the device has decision rules that determine when to raise a flag for impending SH, and when to lower the flag when the data indicate reduced risk;
(4) while the flag is up, the subject is considered at risk of SH within the next 24 hours (the prediction period) and receives an SH alarm.
This dynamic prediction creates theoretical problems both at the level of model-parameter optimization and at the level of assessing the accuracy of the optimized method. We begin with the second problem, because it is the most important for understanding the workings of Algorithm 3.
Evaluating the accuracy of Algorithm 3: While Algorithm 1 and Algorithm 2 use static prediction, for which the evaluation criteria are theoretically evident (the better the prediction of future values, the better the algorithm), the optimization criteria for Algorithm 3 are no longer straightforward. This is because as we increase the percentage of predicted SH events, we inevitably increase the number of "flags up", which in turn increases the number of potential "false alarms". The problem is further complicated because a "false alarm" is not clearly defined. In its pure sense, a false alarm means that the flag was raised but no SH event subsequently occurred. However, SH can be avoided if a person perceives symptoms and takes appropriate action; thus, even when the biochemical potential for SH is present, the event may not occur. To address this problem, we adopted the following optimization criteria:
(1) maximizing the percentage of SH events predicted within 24 hours;
(2) minimizing the ratio Rud of the durations of "flag up" and "flag down" periods.
While the first of these two points is clear, the second may require additional explanation. From the perspective of running Algorithm 3 in a meter, each time an SMBG reading is taken, the meter decides whether to raise a flag for impending SH. Once a flag is raised, it may stay up for a period of time (spanning several subsequent SMBG readings) until a decision is made to lower it. Thus, we have alternating "flag up" and "flag down" periods that change at SMBG points. The ratio Rud referred to in point (2) above is the average time a person's flag is up divided by the average time the flag is down.
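Given a log of alternating flag states, Rud can be computed directly. A minimal sketch (function and argument names are ours; it assumes episode durations have already been extracted from the SMBG timeline):

```python
def annoyance_ratio(up_days, down_days):
    """Rud: mean duration of 'flag up' episodes divided by the mean
    duration of 'flag down' episodes (same time unit for both)."""
    mean_up = sum(up_days) / len(up_days)
    mean_down = sum(down_days) / len(down_days)
    return mean_up / mean_down
```

For example, 1-day alarm periods alternating with 7 alarm-free days give Rud = 1:7, the ratio quoted below.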
Our previous best result, given in the disclosure of the invention, predicted 44% of SH events within 24 hours, with Rud = 1:7, i.e., 1 day of high-risk alarm alternating with 7 days without alarm. Since we assumed at the time that an alarm period lasts at least 24 hours, the algorithm was optimized so that flags would be raised no more than once per week. This ratio is acceptable given that the analysis was done with data from subjects having a high rate of SH events.
In Example No. 1 of the present study we had to use the same data set to refine Algorithm 3, since no other data were available that included both SMBG records and SH records. We also used similar criteria to evaluate the accuracy of Algorithm 3. However, we changed essentially everything else: the tracking of data, the parameter estimation, the overall thresholds, and the decision rules. These changes stem from a new idea: that before SH the body's counterregulatory reserve undergoes a degree of "wear", and that this wear can be tracked with SMBG data. The exact implementation of this idea is described in the "Decision rules" section. Since the decision rule involves continuous criteria and somewhat arbitrary cutoffs, there are multiple solutions, and we selected the best one for further study. However, depending on the presentation of these results, another solution may be selected for future implementations of Algorithm 3.
Summary of results
First, it is important to note that all results given below go far beyond mere statistical significance. As seen in several examples in the next section, the observed differences were always highly significant (p-values below any customary significance level). The purpose of Algorithm 3 is to predict the occurrence of SH events on an individual basis. The results are:
(1) the minimum baseline observation period is 50 SMBG readings, taken over a period of about two weeks at a frequency of 3-4 readings per day. Each subject is then classified into one of two risk groups, which use different decision rules;
(2) from the six-month data we found that it is sufficient to perform the group assignment at the start of observation. We therefore assume that approximately every six months the meter will re-evaluate its owner's group assignment using 50 readings;
(3) the optimal delay for SMBG tracking is 100-150 readings taken at a frequency of 3-4 readings per day. In other words, the best decision criteria are based on calculations using all 150 readings in the meter memory; this matches the storage capacity of the ONE TOUCH ULTRA. Reasonably good results can be obtained using a delay of only 20 readings taken over a week, but longer delays yield better predictions;
(4) the decision rule follows a new computational procedure that uses "provisional means" to calculate the low BG index and other relevant parameters of the tracked subject. We designed specialized software to execute the procedure and process the available data. From a programming point of view, the code required is only about 20 lines, including the calculation of the LBGI;
(5) we investigated a number of decision rules (using various parameters). Ignoring the SMBG frequency, the 24-hour SH prediction obtained by these rules ranged from 43.4% with Rud = 1:25 to 53.4% with Rud = 1:7. Thus, the prediction of SH within 24 hours improved by 10% over our previous results;
(6) as the best solution for further study, we chose a decision rule that predicts 50% of SH events within 24 hours, with Rud = 1:10. The following results refer to this optimal solution under different conditions:
(7) the optimal SMBG frequency is 4 readings per day. At this frequency, the prediction of SH within 24 hours increases to 57.2%, with the same Rud = 1:10. Other SMBG frequencies were also investigated and are reported below;
(8) if we extend the prediction period to 36 or 48 hours, the prediction of SH increases to 57% and 63%, respectively, with the same Rud = 1:10;
(9) the prediction of SH is significantly improved by using baseline information. In fact, the 10% increase over the previous version of Algorithm 3 is due entirely to the use of baseline tracking. This baseline tracking is now modeled as a two-week self-calibration of the meter, without any additional input from the patient;
(10) personal/demographic information, e.g., history of SH or previous HbA1c, did not contribute to better short-term prediction of SH;
(11) raising a flag whenever there has been no SMBG activity for an extended time is not appropriate; it only increases the number of impending-SH alarms without improving prediction. This is because the main component of SH prediction is the recurrence (clustering) of very low BG readings. Estimates of this recurrence are given in an abstract prepared for the June 2002 ADA meeting (Kovatchev et al., Periodically occurring hypoglycemia and severe hypoglycemia (SH) in T1DM patients with a substantial history of SH) (see Appendix).
Detailed description of data processing
The meter stores SMBG readings with the date and exact time (hour, minute, second) of each reading. Thus, in the training data set we have a time-ordered SMBG record for each subject. A total of 75,495 SMBG readings were downloaded from the participants' meters during the study (an average of 4.0 ± 1.5 per person per day). From the subjects' monthly records we obtained the date and time of each SH event. Subjects reported 399 SH events (4.7 ± 6.0 per person). Sixty-eight participants (80%) experienced one or more SH events; based on demographic characteristics, these subjects did not differ from the remaining 20% who experienced no SH.
Data pre-processing:
We developed specialized software for data preprocessing. This involved (1) combining each subject's stored meter data into a continuous 6-8 month sequence of BG readings, and (2) matching each subject's SH records to that sequence by date and time. The latter was done by computing, for each SMBG reading, the time (hours/minutes) to the next SH event and the time elapsed since the last SH event. This made it possible to identify (1) time periods of 24 or 48 hours before or after each SH event, and (2) the time periods between SMBG readings. Because of the nature of SH (coma, unconsciousness), no SMBG reading is taken during SH itself; for the purposes of Algorithm 3, SH events therefore do not coincide with the biochemically significant hypoglycemia used by Algorithm 2. The average interval between an SH event and the last preceding SMBG reading was 5.2 ± 4.1 hours; 29 SH events (7%) had an SMBG reading within the preceding 15 minutes. For each SH event, we counted the number of SMBG readings taken within 24 h, 36 h, 48 h, and 72 h before the event.
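The matching step can be sketched as follows (the helper name and timestamps are ours; real preprocessing would also handle missing diary entries):

```python
from datetime import datetime

def hours_since_last_sh(reading_times, sh_times):
    """For each SMBG timestamp, return hours elapsed since the most
    recent preceding SH event, or None if no SH event precedes it."""
    sh_sorted = sorted(sh_times)
    result = []
    for t in reading_times:
        prior = [s for s in sh_sorted if s <= t]
        result.append((t - prior[-1]).total_seconds() / 3600.0 if prior else None)
    return result
```

The symmetric "time to next SH event" is obtained the same way with the comparison reversed.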
Calculation and self-calibration of baseline risk values:
A low BG index was calculated for each subject based on his/her first SMBG readings. It was determined that the minimum number of readings required to calculate a baseline LBGI is 50, collected in about 2 weeks. Thus, each new meter would need a self-calibration period of about two weeks, during which it screens its owner for overall risk of SH. After this initial period, the person is assigned to one of two risk groups: low-to-moderate risk (LBGI ≤ 3.5, LM group) or moderate-to-high risk (LBGI > 3.5, MH group). Our test data show that a finer grouping is unnecessary. This grouping allows different decision rules to be used in the LM and MH groups, improving the hit rate of the algorithm by about 10% over the initial hit rate given in the present disclosure.
With the test data, the baseline risk did not need to be recalculated. We can therefore assume that, if a person experiences no change in treatment, recalibration is performed approximately every six months. This is consistent with the results of Algorithm 2, which show that long-term prediction of SH remains accurate 6 months after the initial observation period.
However, if a person's glycemic control changes rapidly, more frequent recalibration may be required. The recalibration decision could be automatic, based on an increasing discrepancy between the observed running risk values (see below) and the baseline LBGI. The available data do not allow us to investigate this further, since the hypoglycemia risk of the subjects we observed did not change significantly.
Calculating SMBG parameters: After the preprocessing step, we designed further software to calculate the SMBG parameters used to predict impending SH. The software includes:
(1) a low BG risk value (RLO) is calculated for each BG reading by the following code (BG is measured in mg/dl; the coefficients differ if the unit is mmol/l):
scale = (ln(bg))**1.08405 - 5.381
risk = 22.765*scale*scale
if (bg <= 112.5) then
    RLO = risk
else
    RLO = 0
endif
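A direct Python transcription of the code above (mg/dl units; 112.5 mg/dl is where the risk scale crosses zero, so RLO vanishes smoothly at the cutoff):

```python
import math

def rlo(bg):
    """Low BG risk value (RLO) for a single SMBG reading in mg/dl."""
    scale = math.log(bg) ** 1.08405 - 5.381
    risk = 22.765 * scale * scale
    return risk if bg <= 112.5 else 0.0
```

Lower readings map to sharply higher risk values, which is what makes RLO a sensitive hypoglycemia measure.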
(2) for each SMBG reading with sequence number n, BG(n), a running value LBGI(n) and another statistic, SBGI(n) (the standard deviation of the low BG risk values), are calculated. These two parameters are computed with some delay (k) at each SMBG reading, i.e., over the reading BG(n) and the (k-1) readings taken before BG(n).
(3) the calculation of LBGI(n) and SBGI(n) uses a new provisional-means procedure (recursive code) based on:
initial values at j = n-k (or, more precisely, at max(1, n-k) to account for readings with index less than k):
LBGI(n-k)=rlo(n-k)
rlo2(n-k)=0
for each successive value of j between n-k and n:
LBGI(j)=((j-1)/j)*LBGI(j-1)+(1/j)*RLO(j)
rlo2(j)=((j-1)/j)*rlo2(j-1)+(1/j)*(RLO(j)-LBGI(j))**2
after the loop is complete, we obtain the value of LBGI(n) and calculate
SBGI(n)=sqrt(rlo2(n))
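The recursion above translates line-for-line into Python. This sketch processes one window of RLO values (the caller supplies the last k readings); it is a faithful transcription of the recursion as written, which approximates rather than exactly reproduces the windowed standard deviation:

```python
import math

def running_lbgi_sbgi(rlo_window):
    """Provisional-means recursion over a window of low BG risk values:
    LBGI is the running mean; SBGI is the square root of the recursively
    updated second moment rlo2, as in the formulas above."""
    lbgi = rlo_window[0]          # LBGI(n-k) = rlo(n-k)
    rlo2 = 0.0                    # rlo2(n-k) = 0
    for j, r in enumerate(rlo_window[1:], start=2):
        lbgi = ((j - 1) / j) * lbgi + (1.0 / j) * r
        rlo2 = ((j - 1) / j) * rlo2 + (1.0 / j) * (r - lbgi) ** 2
    return lbgi, math.sqrt(rlo2)  # LBGI(n), SBGI(n)
```

Only the running mean and second moment are stored between readings, which keeps the meter-side code within the roughly 20 lines mentioned earlier.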
Since the maximum n for the ONE TOUCH ULTRA meter is 150, the optimal delay k was sought in the range k = 10 to k = 150. Although the differences in performance were not significant, the optimal delay was determined to be k = 150 (see the next section).
Decision rules: At each SMBG reading, the program determines whether to raise a flag warning of impending SH; if the flag is already up, the program determines whether to lower it. These decisions are based on three threshold parameters, α, β, γ, which operate as follows:
for subjects at low to moderate risk (LM group):
FLAG=0
if(LBGI(n)≥α and SBGI(n)≥β)FLAG=1
if(RLO(n)≥(LBGI(n)+γ*SBGI(n)))FLAG=1
For subjects in the moderate-to-high risk group, only the second if-statement applies. In other words, in the LM group a flag is raised (i.e., set to 1) if both the running value LBGI(n) and its standard deviation SBGI(n) exceed certain thresholds; in either group a flag is raised if the current low BG risk value RLO(n) exceeds LBGI(n) plus γ standard deviations.
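The two if-statements, together with the group split established at baseline (LBGI > 3.5 selects the MH group), can be sketched as one function. The default thresholds are the "best solution" values α = 5.0, β = 7.5, γ = 1.5 reported later in this section:

```python
def sh_flag(lbgi_n, sbgi_n, rlo_n, mh_group, alpha=5.0, beta=7.5, gamma=1.5):
    """Return 1 to raise the impending-SH flag, 0 otherwise.
    In the MH group only the acute-deviation rule applies."""
    flag = 0
    if not mh_group and lbgi_n >= alpha and sbgi_n >= beta:
        flag = 1                      # depleted and unstable (LM group only)
    if rlo_n >= lbgi_n + gamma * sbgi_n:
        flag = 1                      # acute deviation from the running risk
    return flag
```

The heuristic meaning of each rule is explained in the paragraph that follows.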
Heuristic explanation: The values of LBGI(n) and SBGI(n) reflect slower changes in the risk of hypoglycemia; several days of SMBG may be required to change them significantly. Since a higher running LBGI(n) means more frequent and more extreme recent hypoglycemia, we can interpret LBGI(n) and SBGI(n) as reflecting a continued depletion (or lack of replenishment) of counterregulatory reserves over a period of days. In addition, SBGI(n) is an indicator of system stability: a larger SBGI(n) indicates larger BG fluctuations, and therefore a control system that is becoming unstable and susceptible to extreme excursions. Thus, the first logical expression reflects the idea that SH occurs when the counterregulatory defenses are exhausted and the control (external or internal) becomes unstable. The second logical expression captures an abrupt change in low BG risk: a flag is raised whenever the current low BG risk value suddenly exceeds its running average (which may itself already be high). For subjects in the moderate-to-high risk group, only the second logical expression applies, which is consistent with the essentially "permanently depleted" and "permanently unstable" state of these subjects: because they run low BG values continuously and their BG is unstable, any acute hypoglycemic event may provoke SH. In general, a severe-hypoglycemia flag is raised either after a period of low and unstable BG, or after an acute hypoglycemic event that deviates significantly (in risk space) from the recent running risk average. SH events not preceded by any such signal cannot be predicted by the present algorithm. In Table 5C below, we present a sample output illustrating the operation of Algorithm 3 for several subjects:
Table 5C: Sample output illustrating the operation of Algorithm 3 for several subjects:
Each row of the output gives an SMBG reading or an SH event (no reading). ID is the subject's ID number, BG is the BG level in mg/dl, and SH is 1 when an SH event occurs. FLAG is 1 if Algorithm 3 decided to raise the flag; TIME is the time to the last SH event in hours.
Optimization of the delay of the provisional-means procedure: In previous publications we reported that the mean BG level decreases and the BG variance increases over the period 48 to 24 hours before SH. During the 24 hours before SH, the mean BG level decreases further, the BG variance continues to increase, and the LBGI rises sharply. In the 24 hours after SH, the mean BG level normalizes while the BG variance remains sharply elevated; the mean and variance of BG return to baseline levels within 48 hours after SH (see Kovatchev et al., Episodes of severe hypoglycemia in type 1 diabetes are preceded and followed, within 48 hours, by measurable disturbances in blood glucose. J of Clinical Endocrinology and Metabolism, 85:4287-4292, 2000). We used these observations to optimize the delay k of the provisional-means procedure adopted by Algorithm 3, based on the mean values of LBGI(n) and SBGI(n) observed within 24 hours before SH. In short, the delay used to calculate LBGI(n) and SBGI(n) was chosen to maximize these measures within 24 hours before SH relative to the rest of the study, excluding the period after SH during which the system is still out of balance. The optimal delay was found to be k = 150. Tables 6A and 6B show the values of LBGI(n) and SBGI(n) for various values of the parameter k, with means for the two subject groups (low-to-moderate risk and moderate-to-high risk). The differences between individual values of k are small, so practically any k ≥ 10 is suitable. However, based on the current data we recommend k = 150, and all further calculations use this delay. This recommendation is also supported by the decrease in the variance of LBGI(n) and SBGI(n) with increasing delay, reflected in the larger t-values below:
TABLE 6A: LBGI(n) within 24 hours before SH and during the remaining time, for different delays
TABLE 6B: SBGI(n) within 24 hours before SH and during the remaining time, for different delays
*Best solution
As can be seen from Tables 6A and 6B, both LBGI and SBGI increase highly significantly within 24 hours before SH. We therefore attempted to run a direct discriminant or logistic model to predict impending SH. Unfortunately, although both models were statistically significant, this direct approach was not very effective. The discriminant model (which worked better than the logistic regression) correctly predicted 52.6% of upcoming SH events, but its ratio of flag-up to flag-down time was very poor: Rud = 1:4. The model is biased toward the larger class of data points, a tendency to be expected in any standard statistical procedure. We therefore adopted the decision rules given above.
Prediction accuracy for severe hypoglycemia
Optimization of the threshold parameters α, β, and γ: Below we describe in detail the predictive capability of Algorithm 3 for various combinations of the threshold parameters α, β, and γ. Because the relationship of these parameters to the desired outcome (a high percentage of predicted SH and a minimal ratio Rud) is very complex, the optimization procedure we used does not yield a single solution; nor does a single solution appear necessary. What percentage of predicted SH is acceptable at a given ratio of "flag up" to "flag down" appears to be a commercial rather than a mathematical decision. Therefore, we do not claim that any of the solutions given below is optimal. However, to explore the topic further, we adopted as a benchmark a solution predicting 50% of future SH with Rud = 1:10, and used it to study prediction periods other than 24 hours and the number of SMBG readings per day required for a better risk profile.
Table 7 shows the performance of Algorithm 3 for various combinations of values of α, β, and γ, presented as the relationship between the percentage of SH predicted (hit rate) and the ratio Rud, which we call the "annoyance index". Table 7 also includes the average total time (in days) each subject spent in the warned and not-warned states during the study, thereby clarifying the practical meaning of the ratio Rud as a summary of the alternating warned/not-warned periods experienced by the study subjects.
TABLE 7 SH prediction hit rate, disturbance index and mean time
The best solution was used for further analysis. Given that the participants in this study experienced an average of 4.7 SH events, a high-alarm period of 19 days appears acceptable if the alarms can prevent 50% of SH. In addition, high-alarm periods tend to come in clusters; we can therefore assume that, in practice, long and relatively quiet periods alternate with a few days of high-risk alarms. The last row of Table 7 gives a solution with Rud = 1:7, equivalent to the solution given in the present disclosure, but with a hit rate about 10% higher: 53.4% versus 44% for our previous algorithm. At a hit rate comparable to our previous algorithm, the annoyance ratio is below 1:20, that is, 3 times better.
FIG. 14 shows a smooth dependence between the hit rate (in percent) and the ratio Rud. Clearly, the ratio between "flag up" and "flag down" grows rapidly as the hit rate of Algorithm 3 increases. Therefore, given these data, it does not seem advisable to seek parameter combinations yielding hit rates above 50%.
selectable prediction periodAt the start of the description of algorithm 3, we make the basic assumption that an SH event is considered to be predicted if the flag is established within 24 hours before the SH event. This assumption yields a hit rate reported in the previous section. We now calculate hit rates based on other prediction cycles ranging from 12 hours to 72 hours. Throughout the experiment, the parameters α, β and γ were fixed at 5.0, 7.5 and 1.5, respectively, that is, at the values of their best solutions in table 7. Thus, the mark-up ratio remains the same as the solution, Rud1:10, only the hit rate changed, as we changed the definition of hits. Fig. 15 shows the dependency between the prediction cycles and the corresponding hit rates.
Clearly, the hit rate increases rapidly as the prediction period extends to about 24 hours, after which the gain slows. We can therefore conclude that 24 hours ahead is the best and most reasonable prediction period.
Optimal number of SMBG readings per day: Finally, we performed experiments to investigate how many readings per day are needed to produce the best SH predictions.
As noted at the beginning, a total of 399 SH events were reported. Of these, 343 had an SMBG reading within the previous 24 hours (3 more had readings within the previous 48 hours, and 4 more within the previous 72 hours). An additional approximately 50 SH events (14%) had no reasonably recent SMBG readings to contribute to prediction. The 343 events with at least one SMBG reading in the previous 24 hours were used to calculate the hit rates in the previous section; the remaining events were naturally excluded from the calculation.
Further analysis showed that the hit rate increases rapidly with the number of readings taken prior to an SH event. However, if we impose a strict requirement on the number of available readings in order to count an SH event, the number of SH events meeting the requirement drops rapidly (Table 8), because subjects did not always comply with the study requirements. This may be a good reason for future meters to raise some kind of alert: if SMBG readings are not taken with sufficient frequency, Algorithm 3 is no longer useful and could be turned off in the meter.
Table 8 gives the number of SH events with a certain number of previous SMBG readings and the hit rate for these events by algorithm 3. The best row in the table contains the optimal solution of table 7, which is the basis for all subsequent calculations. All hit rates are given in a 24 hour prediction cycle, i.e., a mark within 24 hours before SH. We can conclude that the accuracy with which algorithm 3 predicts SH increases significantly as subject compliance increases. Performing 5 SMBG readings per day, the accuracy improves by 10% from the baseline 50% hit rate:
TABLE 8 Performance of Algorithm 3 when given a certain number of previous SMBG readings
| Number of previous SMBG readings | SH events that meet the requirements of column 1 (% of the total SH) | Hit rate |
| At least 1 in 24 hours | 343(86%) | 49.9% |
| At least 3 in 24 hours | 260(65%) | 54.2% |
| At least 4 in 24 hours | 180(45%) | 57.2% |
| At least 5 in 24 hours | 103(26%) | 64.1% |
| At least 4 in 36 hours | 268(67%) | 52.6% |
| At least 5 in 36 hours | 205(51%) | 54.6% |
| At least 6 in 36 hours | 146(37%) | 60.3% |
| At least 7 in 36 hours | 107(27%) | 60.7% |
| At least 6 in 48 hours | 227(57%) | 53.3% |
| At least 7 in 48 hours | 187(47%) | 54.0% |
| At least 8 in 48 hours | 143(36%) | 55.9% |
| At least 9 in 48 hours | 107(27%) | 59.8% |
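The event counts in Table 8 can be reproduced with a small filter of this kind. This is an illustrative sketch under stated assumptions, not the study's code: `readings_by_event` is a hypothetical structure giving, for each SH event, the lead times (hours before the event) of its prior SMBG readings.

```python
def events_meeting_requirement(readings_by_event, k, window_h):
    """Count SH events that have at least `k` SMBG readings within the
    `window_h` hours before the event (one row of a Table-8-style
    tabulation)."""
    return sum(
        1 for lead_times in readings_by_event
        if sum(1 for t in lead_times if 0.0 <= t <= window_h) >= k
    )
```

Running this for each (k, window) pair yields the left two columns of Table 8; the hit rate is then computed only over the surviving events.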
Other possible improvements tested
Attempts to improve the predictive ability of Algorithm 3 by including external parameters, e.g., the number of SH episodes in the previous year or HbA1c, were not successful. Evidently, the short-term prediction of SH depends mainly on current or recent events. A limitation of this study, however, is that all participants had experienced ≥2 SH episodes in the previous year.
Finally, we examined whether an SH alarm should be issued when an increased risk of hypoglycemia is detected and a period of time has passed without SMBG, the goal being to predict at least some of the SH events not preceded by SMBG readings. This was not successful; the result was mainly false alarms. This finding further confirms the importance of complying with an SMBG protocol that includes sufficiently frequent readings.
Appendix: abstract
Example No.1 estimates the frequency of hypoglycemia and of SH (defined as stupor or unconsciousness precluding self-treatment) after a hypoglycemic (BG < 3.9 mmol/l) event.
85 patients with T1DM (41 women) who had experienced ≥2 SH events in the last year performed 3-5 SMBG readings daily for 6-8 months and recorded SH events by date and time. The subjects' average age was 44 ± 10 years, duration of diabetes 26 ± 11 years, and HbA1c 7.7 ± 1.1%.
All SMBG readings (75,495) were combined by date and time with the subjects' SH events (n=399; SH events generally do not have corresponding SMBG readings). For each SMBG reading or SH event, the time elapsed since the last low BG (<3.9 mmol/l) was calculated. Table 9 below gives the percentages of readings in 3 hypoglycemic ranges, BG < 1.9 mmol/l, 1.9-2.8 mmol/l and 2.8-3.9 mmol/l, and the percentages of SH events preceded by a low BG reading (BG < 3.9 mmol/l) within 24 hours, 24-48 hours, 48-72 hours, and more than 72 hours. The last column gives a runs test rejecting the hypothesis that days containing low BG readings (or SH events) are randomly distributed over the entire time frame; the negative Z values of the test show that days with and without hypoglycemic readings or SH events appear in clusters.
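The clustering test described above is consistent with a standard Wald-Wolfowitz runs test on the binary sequence of days. The following is a generic implementation of that test, offered as a sketch of the idea rather than the study's actual code:

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test on a binary day sequence
    (1 = day with a low-BG reading or SH event, 0 = day without).
    A negative Z means fewer runs than expected under randomness,
    i.e. the 1-days appear in clusters."""
    n1 = sum(1 for x in seq if x)
    n2 = len(seq) - n1
    if n1 == 0 or n2 == 0:
        raise ValueError("sequence must contain both values")
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if bool(a) != bool(b))
    n = n1 + n2
    mu = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
    return (runs - mu) / math.sqrt(var)
```

A long block of low-BG days followed by a long block of clean days gives few runs and a negative Z, matching the "clustered appearance" interpretation in the text.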
TABLE 9 percent hypoglycemia/SH previous with Low BG:
We conclude that more than half of hypoglycemic SMBG readings, and about 2/3 of SH events, are preceded by at least one hypoglycemic reading within the previous 24 hours. Furthermore, hypoglycemic events tend to occur in clusters. Thus, an initial hypoglycemic event can serve as an alarm signal that hypoglycemia is about to recur.
Example No.2
The invention employs daily self-monitoring of blood glucose (SMBG) data and directly improves the performance of home SMBG devices by introducing intelligent data-interpretation logic capable of predicting HbA1c and periods of high risk of significant hypoglycemia. The method comprises two parts: (1) Algorithm 1 estimates HbA1c, and (2) Algorithms 2 & 3 predict significant hypoglycemia in the long term and the short term (within 24 hours), respectively. In this report we describe the steps of proposing, optimizing and validating the HbA1c estimation Algorithm 1, and its accuracy relative to laboratory-derived HbA1c.
Objective:
Our main objective is to achieve 95% of measurements within ±1 HbA1c unit of the laboratory reference; this is the National Glycohemoglobin Standardization Program (NGSP) criterion for HbA1c assay precision.
Method:
Subjects. SMBG data were obtained from 100 subjects with type 1 diabetes and 100 subjects with type 2 diabetes (T1DM, T2DM) over 6 months and 4 months, respectively. HbA1c was tested at months 0, 3 and 6 in T1DM and at months 0, 2 and 4 in T2DM.
Algorithm 1 derivation and optimization. The training data set comprised SMBG and HbA1c data collected over 3 months from T1DM subjects, and SMBG and HbA1c data collected over 2 months from T2DM subjects. These training data were used to optimize Algorithm 1 and to evaluate a number of sample selection criteria for ensuring higher accuracy. The sample selection criteria are requirements on the SMBG sample collected by the meter which, when satisfied, ensure an accurate HbA1c estimate from that sample. The meter thus scans each SMBG sample and, if the sample selection criteria are met, computes and displays an HbA1c estimate. After analyzing a large number of cut points, we selected the following criteria:
1. Test frequency. To generate an HbA1c estimate, the meter requires an average of 2.5 or more tests per day over the past 60 days, i.e., a total of at least 150 SMBG readings over the past two months. It is important to note that this is an average; testing every single day is not required.
2. Data randomization. 60-day samples with only postprandial testing or with insufficient nighttime testing (<3% of readings) were excluded, as were samples highly concentrated at one time of day. These criteria are described in detail in this report.
Results: prospective validation and accuracy of the algorithm:
The algorithm, including the sample selection criteria, was then applied to test data set 1 and to an independent test data set 2. Test data set 1 comprises the SMBG and HbA1c data from the 2 months preceding the last HbA1c test of the T1DM and T2DM subjects; independent test data set 2 consists of data from 60 T1DM subjects enrolled in a previous NIH study. For validation, the estimates produced by Algorithm 1 were compared with the reference HbA1c levels. In test data set 1 the algorithm met the NGSP criterion, with 95.1% of estimates within ±1 HbA1c unit of the laboratory reference. In test data set 2 the algorithm again met the NGSP criterion, with 95.5% of estimates within ±1 HbA1c unit of the laboratory reference. Study of the sample selection criteria showed that 72.5% of subjects can produce one such accurate estimate per day, and 94% of subjects can produce one approximately every 5 days.
Conclusion: daily SMBG data allow accurate estimation of HbA1c, satisfying the NGSP standard for direct HbA1c assay precision.
Subjects & inclusion criteria
We recruited 100 subjects with type 1 diabetes (T1DM) and 100 subjects with type 2 diabetes (T2DM). 179 subjects completed the major part of SMBG data collection, of whom 90 had T1DM and 89 had T2DM. The data of these 179 subjects were used to test Algorithms 2 and 3. Testing Algorithm 1, however, requires that subjects have not only SMBG data, but also HbA1c data together with SMBG records collected within the 60 days before the HbA1c test. At month 3 of the study (month 2 for T2DM), 153 subjects (78 with T1DM) had completed HbA1c data and SMBG records meeting this criterion. In addition, we validated Algorithm 1 with data from N=60 T1DM subjects who participated in our previous NIH study. The demographic characteristics of all subjects are given in Table 10.
TABLE 10 demographic characteristics of subjects
| Variable | T1DM | T2DM | NIH |
| Age (years) | 41.5(11.6) | 50.9(8.1) | 44.3(10.0) |
| Sex: % female | 41% | 43% | 46% |
| Duration of diabetes (years) | 20.1(10.1) | 11.7(8.2) | 26.4(10.7) |
| Body mass index | 25.4(4.7) | 34.2(8.1) | 24.3(3.4) |
| Baseline HbA1c | 7.5(1.1) | 8.5(2.1) | 7.6(1.0) |
| Second HbA1c | 7.3(1.2) | 7.9(1.6) | 7.4(0.8) |
| Third HbA1c | 7.0(0.9) | 7.5(1.1) | - |
| # SMBG readings/subject/day | 5.4(2.3) | 3.5(0.8) | 4.1(1.9) |
| Days with SMBG readings in the 2 months before the second HbA1c | 56.9(5.4) | 57.3(4.3) | 37.5(14.3) |
Observed meter error
Our investigation showed that the primary reason for incomplete data in the 60 days before an HbA1c or other assay was not subject noncompliance, but meter failure. Evidently, if the patient presses the "M" button too long, the time and date of the ONE TOUCH ULTRA meter "jump" to a random date/time (e.g., November 2017). When the meters were returned we checked the date/time of each one and found that 60 meters had experienced such an event during the study. The shift in time/date affected 15,280 readings, or nearly 10% of all readings. We saved these readings separately and had a student check them; in many, but not all, cases he was able to recover the date/time sequence of the readings. This error, together with the loss of a few meters in the mail, reduced the number of subjects with data good enough for the Algorithm 1 analysis from 179 to 140. After recovery of data from 12 subjects, 153 subjects were available, 78 with T1DM and 75 with T2DM, whose time sequence of data before the HbA1c assay was not disturbed and who were therefore suitable for validating Algorithm 1.
Procedure
All subjects completed IRB-approved consent forms and attended an orientation meeting at which the ONE TOUCH ULTRA meter was introduced and an entry questionnaire was completed. Immediately after orientation, all subjects visited the UVA laboratory, where blood was drawn for baseline HbA1c. T1DM subjects participated for 6 months, with laboratory HbA1c tests at months 3 and 6; T2DM subjects participated for 4 months, with laboratory HbA1c tests at months 2 and 4. Self-monitoring (SMBG) data were downloaded regularly from the meters and stored in a database. Significant hypoglycemic and hyperglycemic events were recorded in parallel every two weeks by an automated e-mail/telephone tracking system.
Data storage and scrubbing
The raw ONE TOUCH ULTRA data for the T1DM and T2DM subjects were stored in an InTouch database. Custom-developed software was used to clean these raw data for subject and meter errors; in some cases the data were cleaned manually (see meter errors above). When no correction was possible, the data were discarded.
To ensure that our optimization results generalize to the entire population, the algorithm was first optimized on a training data set and then examined on test data sets.
Training data set. Includes SMBG data collected in the 60 days before the T1DM subjects' month-3 HbA1c determination. This data set was used to optimize the formula of Algorithm 1. Data collected from T2DM subjects before their month-2 HbA1c determination were used to identify sample selection criteria that were not evident in the T1DM data; the T2DM data were not, however, used to optimize the formula of Algorithm 1. The file containing these data is pass01.dat.
Test data set 1. Includes SMBG data from the 60 days before the T1DM subjects' month-6 HbA1c determination and the T2DM subjects' month-4 HbA1c determination. In the following we refer to these data as data set 1. The file containing these data is pass02.dat.
Test data set 2. Includes data from our previous NIH study of N=60 T1DM subjects. These data were collected with the ONE TOUCH PROFILE meter. In the following we refer to these data as data set 2. The file containing these data is hat0.xls.
Variables in pass01.dat, pass02.dat and hat0.xls are as follows:
ID, Month, Day, Time, Year — self-explanatory ID number and time of the reading.
PLASBG — BG recorded by the ONE TOUCH ULTRA (N/A in HAT0, where the ONE TOUCH PROFILE was used).
RISKLO, RISKHI — control variables representing the results of the data transformation (see below).
BG and BGMM — BG converted to whole blood BG, then expressed in mmol/l (see below).
The aggregated data (one record per subject), HbA1c, its estimate and the estimation error are stored in the Excel files pass01.xls and pass02.xls.
Variables in pass01.xls, pass02.xls and hat1.xls are as follows:
ID, type of diabetes;
HBA1 — reference baseline HbA1c value;
HBA2 — month-3 reference HbA1c (month 2 for T2DM) — this is the value to be predicted;
EST2 and ERR2 — the HbA1c estimate and its error;
Control variables (all variables used by Algorithm 1):
BGMM1 — average BG in mmol/l (see below);
RLO1, RHI1 — low and high BG indices (see below);
L06 — nighttime low BG index, calculated from readings taken between midnight and 6:59 a.m. (i.e., if(0 ≤ HOUR ≤ 6));
NC1 — the number of SMBG readings in the past 60 days;
NDAYS — the number of days in the past 60 days with SMBG readings;
N06, N12 — percentages of SMBG readings in the time intervals 0:00-6:59 and 7:00-12:59, respectively;
EXCLUDE = 0, 1 — if EXCLUDE=1, the algorithm recommends excluding the sample.
The files pass01.dat and pass01.xls can be matched by subject ID number; similarly, the files pass02.dat and pass02.xls, and hat0.xls and hat1.xls. The raw data and the entire second-generation data files were sent to LifeScan, Inc.
Proposal of algorithm 1
Derivation of a formula:
Most of the explanation and rationale for Algorithm 1 was given in Example No.1 of this project. Example No.1 did not include data collection; instead, we used a data set collected by Amylin Pharmaceuticals in a clinical trial. Example No.1 presented three possible formulas for estimating HbA1c from SMBG data: (1) a formula using the average SMBG and the low and high BG indices; (2) a formula using the average SMBG and a previous reference HbA1c reading; and (3) a simple linear formula using only the average SMBG (see Example No.1).
An objective criterion for evaluating the accuracy of HbA1c estimation was also proposed (in Example No.1 we estimated the accuracy of each formula by least-squares error, percent error, and absolute error). This new requirement translates into a different optimization criterion for Algorithm 1: rather than minimizing the sum of squared errors (least-squares estimation), the optimization keeps the estimates within a uniform band of ±1 unit around the HbA1c reference value.
To do so, we analyzed the errors of our original linear model (the formula of Example No.1) with respect to this uniform criterion, using the training data of the T1DM subjects. We found that these errors are positively correlated with the subject's high BG index (r=0.3), and we used this relationship to correct our original linear model. We found it preferable to use the high BG index as a grouping variable, dividing the subject sample into groups of increasing high BG index and introducing a correction to the linear model within each group. The idea is to introduce a correction using the low BG index into each particular group, rather than into all samples as suggested in Example No.1. This variation corresponds to the different optimization scheme based on the NGSP standard.
Thus, from the training data of the T1DM subjects, we finalized Algorithm 1 as follows:
Part 1 — data preprocessing
BG = PLASBG/1.12 (converts plasma BG to whole blood BG, as is customary).
BGMM = BG/18 (converts BG to mmol/l).
The following lines compute the low and high BG indices for each SMBG reading:
COM SCALE=(ln(BG))**1.08405-5.381.
COM RISK1=22.765*SCALE*SCALE.
COM RISKLO=0.
IF(BG≤112.5) RISKLO=RISK1.
COM RISKHI=0.
IF(BG>112.5) RISKHI=RISK1.
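This preprocessing can be sketched in Python as follows. It is a minimal sketch under the assumption that the symmetrizing transform uses the natural logarithm of BG; the constants and the 112.5 mg/dl split come from the preprocessing steps above, while the function name is hypothetical.

```python
import math

def risk_indices(bg_mgdl):
    """Low/high BG risk for one SMBG reading (whole blood BG in mg/dl):
    symmetrize the BG scale, square and rescale it, and assign the risk
    to the low or high side depending on the 112.5 mg/dl threshold."""
    scale = math.log(bg_mgdl) ** 1.08405 - 5.381
    risk = 22.765 * scale * scale
    if bg_mgdl <= 112.5:
        return risk, 0.0   # (RISKLO, RISKHI)
    return 0.0, risk
```

By construction the risk is near zero at the 112.5 mg/dl crossover and grows toward both the hypoglycemic and hyperglycemic extremes, which is what makes the per-subject averages RLO1 and RHI1 usable as separate low- and high-BG indices.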
The following lines aggregate the data for each subject:
BGMM1 — per-subject mean of BGMM;
RLO1 — per-subject mean of RISKLO;
RHI1 — per-subject mean of RISKHI;
L06 — the mean of RISKLO computed over nighttime readings only; missing if there are no nighttime readings;
N06, N12, N24 — percentages of readings in the time intervals 0:00-6:59, 7:00-12:59 and 18:00-23:59, respectively, i.e., if(0 ≤ HOUR ≤ 6), if(7 ≤ HOUR ≤ 12) and if(18 ≤ HOUR ≤ 24);
NC1 — total number of SMBG readings in the past 60 days;
NDAYS — the number of days in the past 60 days with SMBG readings.
Part 2-estimation procedure:
The estimation procedure is based on the linear model of our example No. 1:
HbA1c=0.41046*BGMM+4.0775.
Analyzing the errors of this formula, we found that the error depends on the high BG index. We therefore classified the subjects by their high BG indices and then amended the linear model within each group as follows:
A. Each subject is assigned a group according to his/her high BG index:
if(RHI1≤5.25 or RHI1≥16) GRP=0.
if(RHI1>5.25 and RHI1<7.0) GRP=1.
if(RHI1≥7.0 and RHI1<8.5) GRP=2.
if(RHI1≥8.5 and RHI1<16) GRP=3.
B. for each group, we estimate as follows:
E0=0.55555*BGMM1+2.95.
E1=0.50567*BGMM1+0.074*L06+2.69.
E2=0.55555*BGMM1-0.074*L06+2.96
E3=0.44000*BGMM1+0.035*L06+3.65.
EST2=E0
if(GRP=1) EST2=E1.
if(GRP=2) EST2=E2.
if(GRP=3) EST2=E3.
C. Corrections for a few outliers:
if(missing(L06)) EST2=E0.
if(RLO1≤0.5 and RHI1≤2.0) EST2=E0-0.25.
if(RLO1≤2.5 and RHI1>26) EST2=E0-1.5*RLO1.
if((RLO1/RHI1)≤0.25 and L06>1.3) EST2=EST2-0.08.
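The estimation procedure of Part 2 can be collected into one Python sketch. This is an illustration under stated assumptions, not the meter's firmware: the function name is hypothetical, the four groups are read as partitioning the RHI1 ranges (≤5.25 or ≥16), (5.25, 7), [7, 8.5) and [8.5, 16), and a missing nighttime index is represented by `None`.

```python
def estimate_hba1c(bgmm1, rhi1, rlo1, l06):
    """Assign a group by the high BG index, apply that group's linear
    formula, then apply the outlier corrections. `l06` is the nighttime
    low BG index; pass None when there are no nighttime readings."""
    # A. group assignment by the subject's high BG index
    if 5.25 < rhi1 < 7.0:
        grp = 1
    elif 7.0 <= rhi1 < 8.5:
        grp = 2
    elif 8.5 <= rhi1 < 16.0:
        grp = 3
    else:
        grp = 0
    # B. per-group estimates (E0 is also the fallback with no night data)
    e0 = 0.55555 * bgmm1 + 2.95
    if l06 is None or grp == 0:
        est2 = e0
    elif grp == 1:
        est2 = 0.50567 * bgmm1 + 0.074 * l06 + 2.69
    elif grp == 2:
        est2 = 0.55555 * bgmm1 - 0.074 * l06 + 2.96
    else:
        est2 = 0.44000 * bgmm1 + 0.035 * l06 + 3.65
    # C. outlier corrections
    if rlo1 <= 0.5 and rhi1 <= 2.0:
        est2 = e0 - 0.25
    if rlo1 <= 2.5 and rhi1 > 26.0:
        est2 = e0 - 1.5 * rlo1
    if l06 is not None and rhi1 > 0 and rlo1 / rhi1 <= 0.25 and l06 > 1.3:
        est2 -= 0.08
    return est2
```

For example, a subject with BGMM1 = 8.0 mmol/l and a moderate high BG index falls in the base group and receives the uncorrected linear estimate 0.55555·8.0 + 2.95 ≈ 7.39%.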
Accuracy standards
To evaluate the accuracy of Algorithm 1, we used several standard criteria:
1) the NGSP accuracy standard, which requires at least 95% of all estimates to lie within ±1 HbA1c unit of the HbA1c reference value;
2) the mean absolute deviation of the HbA1c estimate from the measured value;
3) the mean percent deviation of the HbA1c estimate from the measured value.
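These three criteria can be sketched in a few lines of Python (a minimal illustration; the function and variable names are hypothetical):

```python
def accuracy_summary(estimates, references):
    """Percentage of estimates within +/-1 unit of the reference
    (the NGSP-style criterion), mean absolute deviation, and mean
    percent deviation."""
    pairs = list(zip(estimates, references))
    n = len(pairs)
    pct_within = 100.0 * sum(1 for e, r in pairs if abs(e - r) <= 1.0) / n
    mad = sum(abs(e - r) for e, r in pairs) / n
    mpd = 100.0 * sum(abs(e - r) / r for e, r in pairs) / n
    return pct_within, mad, mpd
```

The first returned value is the one compared against the 95% NGSP threshold in the tables that follow.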
Note that the NGSP accuracy standard is intended for devices that measure HbA1c directly; here we apply it to estimates of HbA1c computed from SMBG data. The purpose of this estimation, however, is not to replace laboratory HbA1c measurement, but to assist patients and physicians in the routine management of diabetes. In contrast to a laboratory measurement, the estimate uses data that are readily obtainable on a daily basis, without special equipment or a visit to the doctor's office.
To confirm that another direct measurement of HbA1c is consistent with the traditional laboratory measurement, we examined blood samples of 21 IDDM patients and analyzed HbA1c simultaneously with the DCA2000 and a clinical assay. Among the 21 measurements there was one large error of 2.5 HbA1c units. Table 11 gives the accuracy results for this FDA-recommended office device:
TABLE 11 accuracy of DCA2000 in T1DM
| DCA2000 | |
| NGSP standard — percentage within ±1 HbA1c unit | 95.2% |
| Mean absolute error (HbA1c units) | 0.45 |
| Mean percentage error | 5.7% |
Sample selection criteria
Formula derivation:
The HbA1c estimate uses SMBG from 60 consecutive days, which we treat as a sample. During his/her SMBG, each person can produce a large number of samples; in fact, every new measurement produces a new sample slightly different from the previous one. It is therefore natural for the meter to have control points that ensure the quality of the SMBG sample used to estimate HbA1c.
Accordingly, after optimization, the generalized algorithm formula was applied to the entire training data set (T1DM and T2DM subject data) to investigate the conditions under which an SMBG sample produces an inaccurate HbA1c estimate.
This investigation focused on the following SMBG patterns leading to inaccurate estimates:
1) infrequent SMBG — a certain number of readings over the two months is required to estimate HbA1c; if that number is not reached, the estimate may be inaccurate;
2) an SMBG pattern biased toward hyperglycemia, which can occur when subjects are primarily concerned with high BG, e.g., testing after meals or while taking oral medication;
3) a time-skewed SMBG pattern, with testing mainly at fixed times of day, which poorly covers the subject's daily BG fluctuation.
After investigating these patterns, we selected the best sample selection criteria based on the cut points giving the highest accuracy with the fewest exclusions. For a detailed description of the program logic, for coding purposes, see Appendix A.
Final sample selection criteria:
Criterion 1. Test frequency. The algorithm requires that the 60-day sample include an average of at least 2.5 tests per day, i.e., at least 150 SMBG readings in the past 60 days, to produce an HbA1c estimate (NC1 ≥ 150).
Criterion 2. Data randomization:
2a) Oral treatment/postprandial testing: (RLO1/RHI1 > 0.005). In some SMBG samples the distribution of SMBG appears highly biased toward hyperglycemia. This occurs mainly in T2DM subjects, who appear to measure BG almost only when it is high. We assume that these samples do not include tests in the hypoglycemic range. Our investigation showed that about 1/3 of such samples produce an excessively high HbA1c estimate (the other 2/3 still produce accurate estimates). We therefore recommend that the meter not display a result when a biased sample is encountered; the computation is formulated as requiring the LBGI to be at least 1/2 of a percent of the HBGI.
2b) Night testing: (NO6 > 3%). This criterion ensures that at least some nighttime hypoglycemia can be accounted for. It requires that at least 3% of all readings be taken at night (midnight-7:00 am); in other words, a sample is acceptable if at least 5 of the 150 readings taken within 2 months are at night. Note that patients are generally advised to test at night, so this criterion promotes good management.
2c) Prevention of highly abnormal test patterns. If more than 3/4 of the readings are taken within any single 6-hour interval of the day, the sample cannot produce an estimate. For example, if 80% of the tests in a sample were performed just after breakfast, no estimate is made. This criterion was required by LifeScan, Inc. to prevent people from trying to "beat the algorithm", allowing us to assure validity, particularly to clinicians.
Accuracy of sequential adoption of selection criteria in training data sets
The following tables illustrate the effect of the selected sample selection criteria on accuracy and on the number of exclusions in the training data set. Shown are the precision of the final version of Algorithm 1 proposed as part of this study (final algorithm) and the precision of the simplest linear function presented in Example No.1 (first linear model).
We show the accuracy of each model without any sample selection criteria, and as sample selection criterion 1 (test frequency, number of readings NR ≥ 150) and criterion 2 (data randomization) are applied sequentially, as described above.
As seen in all the tables, the accuracy of Algorithm 1 improves with the sequential adoption of the sample selection criteria and reaches the 95% NGSP requirement after all criteria are applied. The latter result is highlighted in the tables.
Table 12A. Final sample selection criteria in the training data set — all subjects:
Table 12B. Final sample selection criteria in the training data set — T1DM. The coefficients of Algorithm 1 were optimized on this sample, which explains its high accuracy even without sample selection.
Table 12C. Final sample selection criteria in the training data set — T2DM. Sample selection criterion 2 (data randomization) was derived primarily from this sample, which explains the 5% gain in precision when this criterion is applied.
Sample exclusion frequency in training data:
The meter has the opportunity to estimate HbA1c at each new reading. If the sample does not meet the selection criteria, the meter will not display HbA1c and will:
(a) wait until a suitable sample is collected, or
(b) if no suitable sample is collected, e.g., because someone has a permanently biased measurement pattern, issue a prompt to modify the SMBG pattern.
Our survey shows that most subjects (>95%) obtain at least 10 HbA1c estimates within 60 days (as long as their measurement frequency is sufficient), while only 2% of subjects get no estimate because of a biased measurement pattern. These 2% of subjects need to be prompted to correct their measurement pattern. The final results of this investigation are given below.
We calculated for how many days (out of 60) the meter would be unable to show the user an HbA1c result because the sample did not meet the selection criteria:
1) for 72.5% of all subjects, the meter could report HbA1c daily;
2) for another 7.5% of all subjects, the meter could report HbA1c on 45-59 days (out of 60);
3) for another 10% of all subjects, the meter could report HbA1c on 12-44 days (out of 60);
4) for 9 subjects (5.9%), the meter could not report HbA1c unless they changed their SMBG pattern.
Note that most of these subjects did not get an estimate because they failed test frequency criterion 1, i.e., their samples always contained fewer than 150 readings. Thus at least 94% of all subjects (T1DM and T2DM combined) can obtain at least one HbA1c estimate approximately every 5 days without changing their measurement routine.
Among subjects with at least 150 readings in 60 days, only 3 could not obtain an HbA1c estimate:
1) 95.6% obtained at least 10 HbA1c estimates within 60 days;
2) 2.2% did not obtain any estimate.
Therefore, about 98% of subjects who measure on average 2.5 times per day over 60 days receive HbA1c estimates, and >95% obtain at least one estimate per week. We conclude that sample selection criterion 2 (data randomization) has minimal effect on the ability to display an HbA1c estimate over time; only about 2% of subjects need to be prompted to improve their SMBG style.
It should be noted that the sample selection criteria can be used to improve the accuracy of any HbA1c estimation formula. The selection criteria are independent of any particular algorithm/formula and are applied before estimation begins. For example, when applied, they improve the accuracy both of Algorithm 1 as proposed in this study and of our original linear model proposed in Example No.1.
In addition, it is worth examining the effect of some other sample selection criteria to show that accuracy can be improved further. For example, one of the original test frequency criteria may prove more effective when applied to the data. That criterion is described further in Appendix E.
Prospective verification of algorithm 1:
accuracy in test data set 1:
The algorithm, including the final sample selection criteria, was then applied to test data set 1 (SMBG from the 2 months before the last HbA1c of the T1DM and T2DM subjects) to produce HbA1c estimates. These estimates were then compared with the HbA1c reference values to prospectively validate Algorithm 1. Table 13 summarizes this validation; for a more detailed description of the effect of each sample selection criterion on the algorithm, see Appendix C.
Table 13: precision of algorithm prospective application:
Accuracy in test data set 2:
Another, independent NIH data set (N=60 T1DM subjects) demonstrated similar accuracy: 95.5% of estimates were within ±1 HbA1c unit of the laboratory reference (Table 14):
TABLE 14 precision of Algorithm 1 in independent NIH data sets:
| All objects | |
| NGSP standard — percentage within ±1 HbA1c unit | 95.5% |
| Mean absolute error (HbA1c units) | 0.42 |
| Mean percentage error | 5.9% |
Comparison of the accuracy of Algorithm 1 with the accuracy of FDA-recommended office equipment:
As shown in Table 15 below, the accuracy of Algorithm 1 is comparable to the precision of the HbA1c assays used in physicians' offices. As explained in the accuracy standards section, the DCA2000 data were used to verify whether another direct measurement of HbA1c is consistent with the laboratory measurement. We analyzed HbA1c blood samples of 21 T1DM patients using both the DCA2000 and a clinical assay; among these 21 tests there was one large error of 2.5 HbA1c units:
TABLE 15 accuracy of DCA2000 in T1DM compared to Algorithm 1:
| DCA2000 | Test data set 1 | Test data set 2 | |
| NGSP standard — percentage within ±1 HbA1c unit | 95.2% | 95.1% | 95.5% |
| Mean absolute error (HbA1c units) | 0.45 | 0.45 | 0.42 |
| Mean percentage error | 5.7% | 6.2% | 5.9% |
Frequency of sample exclusion in test data sets:
As discussed in the section presenting Algorithm 1, the meter has the opportunity to estimate HbA1c at each new reading. If the sample does not meet the selection criteria, the meter does not display HbA1c.
We used test data sets 1 and 2 to prospectively estimate the frequency of sample exclusion. To this end, we calculated for how many days (out of 60) the meter could display HbA1c to a person, i.e., how many days that person had samples meeting the sample selection criteria. Tables 16A and 16B summarize these results for test data sets 1 and 2. We include data for all subjects, and separately for subjects measuring on average ≥1.5 times/day (90 SMBG readings over 60 days) and ≥2.5 times/day:
TABLE 16A frequency of sample exclusion in test data set 1
| All objects (N148) | Average measurement ≧ 1.5 times/day (N ═ 146) | Average measurement ≧ 2.5 times/day (N ═ 130) | |
| Percentage of subjects for whom the meter can report HbA1c daily | 69.6% | 72.6% | 77.7% |
| Percentage of subjects for whom the meter can report HbA1c every 3 days | 87.8% | 91.1% | 93.1% |
| Percentage of subjects for whom the meter can report HbA1c once a week | 91.9% | 95.5% | 96.9% |
TABLE 16B frequency of sample exclusion in test data set 2
| All objects (N60) | Average measurement ≧ 1.5 times/day (N ═ 55) | Average measurement ≧ 2.5 times/day (N ═ 30) | |
| Percentage of subjects for whom the meter can report HbA1c daily | 51.7% | 83.6% | 80.0% |
| Percentage of subjects for whom the meter can report HbA1c every 3 days | 95.0% | 100.0% | 100.0% |
| Percentage of subjects for whom the meter can report HbA1c once a week | 96.7% | 100.0% | 100.0% |
Conclusion:
Tables 13-16 demonstrate that the meter produces HbA1c estimates meeting the 95% NGSP accuracy criterion, and that >96% of people who measure on average 2.5 times per day can obtain an accurate estimate at least once a week.
Appendix A — software logic of the sample selection criteria
The sample selection criteria result either in the algorithm excluding certain SMBG samples or in a message being sent to the subject. The sample selection criteria are programmed as follows:
Criterion 1. Test frequency. The algorithm requires that the 60-day sample include an average of at least 2.5 measurements per day, i.e., at least 150 SMBG readings in the past 60 days, to produce an HbA1c estimate:
EXCLUDE=0
if(NC1<150) EXCLUDE=1
Criterion 2. Data randomization:
2a) Oral treatment/postprandial testing. In some SMBG samples, the SMBG distribution appears highly biased toward hyperglycemia. This occurs mainly in T2DM subjects, who appear to measure almost only at high BG. We assume that these samples do not include measurements in the hypoglycemic range. Our survey showed that about 1/3 of these subjects overestimate HbA1c (the other 2/3 still produce accurate estimates). Accordingly, we suggest that the meter not display a result when the sample is biased; the computation is formulated as requiring the LBGI to be at least 1/2 of a percent of the HBGI:
if(RLO1/RHI1<0.005) EXCLUDE=1.
2b) Nighttime testing (NO6 > 3%): This criterion ensures that at least a portion of nighttime hypoglycemia can be accounted for. It requires that more than 3% of all readings be taken at night (midnight-7:00 am); in other words, a sample is acceptable if at least 5 of 150 readings taken over two months are nighttime readings. Note that patients are usually advised to test at night, so this criterion can promote good management:
if(NO6≤3.0) EXCLUDE=1.
2c) Prevention of highly abnormal test patterns: If more than 3/4 of the readings are taken within any single 6-hour interval of the day, the sample will not yield an estimate. For example, if 80% of the tests in a sample are performed just after breakfast, no estimate is produced. This criterion was required by LifeScan corporation to guard against attempts to "confuse the algorithm", thereby helping to ensure validity, particularly for physicians. In our data, no sample was so highly abnormal as to trigger this criterion (see Appendix B, criterion 2c, for details). Depending on the software implementation, the following frequencies need to be calculated from the SMBG data:
M12 = % of SMBG readings taken 6:00 am-noon (breakfast)
M18 = % of SMBG readings taken noon-6:00 pm (lunch)
M24 = % of SMBG readings taken 6:00 pm-midnight (dinner)
M06 = % of SMBG readings taken midnight-6:00 am (night)
M15 = % of SMBG readings taken 9:00 am-3:00 pm
M21 = % of SMBG readings taken 3:00 pm-9:00 pm
M03 = % of SMBG readings taken 9:00 pm-3:00 am
M09 = % of SMBG readings taken 3:00 am-9:00 am
Then, for each of the above percentages Mij:
if(Mij>75.0)EXCLUDE=1。
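For illustration only, the exclusion logic above can be collected into a single routine. The following Python sketch assumes a hypothetical data layout in which each SMBG reading is a (hour-of-day, RiskLO, RiskHI) tuple; the function name and layout are illustrative assumptions, not part of the specification.

```python
def exclude_sample(readings):
    """Sample-selection sketch. `readings` is a list of (hour, rlo, rhi)
    tuples, where hour is the hour of day (0-23) of an SMBG reading and
    rlo/rhi are that reading's low/high BG risk values. Returns True if
    the 60-day sample should be excluded (no HbA1c estimate shown)."""
    n = len(readings)
    # Criterion 1 - test frequency: at least 150 readings in 60 days.
    if n < 150:
        return True
    rlo1 = sum(r for _, r, _ in readings) / n   # average low-BG risk (RLO1)
    rhi1 = sum(h for _, _, h in readings) / n   # average high-BG risk (RHI1)
    # Criterion 2a - biased samples: require RLO1/RHI1 >= 0.005.
    if rhi1 > 0 and rlo1 / rhi1 < 0.005:
        return True
    # Criterion 2b - nighttime testing: > 3% of readings at night (midnight-7 am).
    night = sum(1 for hr, _, _ in readings if 0 <= hr < 7)
    if 100.0 * night / n <= 3.0:
        return True
    # Criterion 2c - no 6-hour interval (both interval sets) may hold > 75%.
    for start in (6, 12, 18, 0, 9, 15, 21, 3):
        in_window = sum(1 for hr, _, _ in readings if (hr - start) % 24 < 6)
        if 100.0 * in_window / n > 75.0:
            return True
    return False
```

For a uniformly spread sample of 160 readings the function returns False (no exclusion), while any sample of fewer than 150 readings is excluded under criterion 1.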
appendix B-sample selection criteria 2C
This standard is required by LifeScan corporation to ensure that highly abnormal test patterns are prevented. The purpose of this criterion is to prevent people from "confusing the algorithm".
Basically, the criterion specifies that if more than 3/4 (or another desired fraction) of the readings are taken within any 6-hour interval of the day (or another desired interval), no estimate is produced.
Thus, for example, if more than 3/4 of the tests are performed after dinner, no estimate is obtained. This further supports our general statement that people who do not test at random times are excluded from the calculation. The special-case computations and coding may look complex, but the key point is that a panel statement such as "you must test at random times during the day" can cover all of our exclusion criteria (except test frequency). If needed, a more precise definition can be added in the fine print: "for any 6-hour interval of the day, no more than 75% of all readings may fall in that interval". Following this criterion should improve the clinical acceptance of the algorithm.
In more detail:
The four 6-hour intervals are defined as follows:
6:00 am-noon (breakfast)
Noon-6: 00pm (lunch)
6:00pm-12:00 (dinner)
12:00-6:00am (night)
The criterion is run twice with two different sets of intervals, to prevent someone from concentrating their tests near the boundary between two 6-hour intervals and still incorrectly passing the first check. For example, if a subject took 40% of readings at 11:50 am and 40% at 12:10 pm, the testing would still be concentrated; it would pass the first set of intervals but fail the second.
Second set of intervals:
9:00am-3:00pm
3:00pm-9:00pm
9:00pm-3:00am
3:00am-9:00am
Note that, alternatively, from a coding point of view one can proceed as follows with the same result:
the readings in any 18-hour period must be no less than 25% of the total readings.
This must be run over 18-hour periods whose start times are offset from one another by 3 hours:
9:00am-3:00am
12:00 noon-6:00 am
3:00pm-9:00am
6:00pm-12:00 noon
9:00pm-3:00pm
12:00 midnight-6: 00pm
3:00am-9:00pm
6:00am-12:00 midnight
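The equivalence between the two formulations can be checked directly: the eight 6-hour intervals and the eight 18-hour intervals are complements of one another, so "more than 75% of readings in some 6-hour window" holds exactly when "fewer than 25% of readings in some 18-hour window" holds. A Python sketch (function names are illustrative):

```python
def concentrated_6h(hours):
    """True if any of the eight 6-hour intervals (start times 3 h apart)
    holds more than 75% of the readings. `hours` = hour of day per reading."""
    n = len(hours)
    return any(sum((h - s) % 24 < 6 for h in hours) > 0.75 * n
               for s in (6, 12, 18, 0, 9, 15, 21, 3))

def sparse_18h(hours):
    """Equivalent complement form: True if any of the eight 18-hour
    intervals (start times 3 h apart) holds fewer than 25% of the readings."""
    n = len(hours)
    return any(sum((h - s) % 24 < 18 for h in hours) < 0.25 * n
               for s in (9, 12, 15, 18, 21, 0, 3, 6))
```

A sample concentrated around 8:00 am trips both checks; a sample spread evenly over the day trips neither.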
Appendix C - Effect of adding the sample selection criteria on algorithm accuracy in test data set 1
As explained in the algorithm proposal section, the tables below report the accuracy of algorithm 1 (the final algorithm proposed as part of the present study) and of the simplest linear function (the initial linear model) proposed in example No. 1. The tables give the accuracy of algorithm 1 in test data set 1 with no sample exclusion and with the two sample selection criteria applied in sequence:
Criterion 1 - test frequency, number of readings NR ≥ 150, and
Criterion 2 - data randomization, as explained in the sample selection criteria:
TABLE 17A Accuracy of Algorithm 1 - all subjects:
TABLE 17B Accuracy of Algorithm 1 in T1DM:
TABLE 17C Accuracy of Algorithm 1 in T2DM:
Appendix D - Alternative test frequency criterion
A more advanced test frequency criterion could improve the accuracy of algorithm 1 further, because test frequency criterion 1 is based not only on data analysis but also on other considerations. If criterion 1, which requires 150 readings within 2 months, proves too stringent, an alternative is the original test frequency criterion, which requires SMBG readings on at least 35 of the 60 days at a frequency of 1.8 readings per day, i.e., a total of 63 readings over 35 of the 60 days. Table 18 confirms that with this original, looser test frequency criterion plus criterion 2 (data randomization), the accuracy of algorithm 1 exceeds 95%:
TABLE 18 Accuracy of Algorithm 1 using the alternative test frequency criterion (readings on 35 days at 1.8 readings/day) plus the data randomization criteria:
Note: In addition, the alternative criterion screens out samples with large amounts of missing data; e.g., if SMBG is interrupted for 4 weeks and then resumed, no HbA1c estimate is displayed. A clear example of this pattern appears in test data set 2: the subject with the largest error in his/her HbA1c estimate collected 159 readings on only 30 of 60 days. Thus, by collecting readings rapidly over a few days, a subject could still meet the 150-reading requirement yet produce an inaccurate HbA1c estimate.
Exemplary definitions of example No. 2 (not limiting herein)
6) Severe Hypoglycemia (SH) is defined as a low BG episode resulting in coma, seizure, or unconsciousness that precludes self-treatment;
7) Moderate Hypoglycemia (MH) is defined as severe neuroglycopenia that disrupts the subject's activity but does not preclude self-treatment;
8) Biochemical Severe Hypoglycemia (BSH) is defined as a plasma BG reading at or below 39 mg/dl;
9) biochemical Moderate Hypoglycemia (BMH) is defined as plasma BG readings of 39-55 mg/dl;
10) All of the above conditions are referred to collectively as significant hypoglycemia.
Additional objects
The data of this example were used to prospectively validate the following algorithms:
Algorithm 2 - a classification algorithm that uses 30-45 days of a subject's SMBG data to classify the subject into a risk range for future significant hypoglycemia. The classification is temporary; e.g., when the subject's SMBG pattern changes, the classification changes as well.
Algorithm 3 - a data tracking/decision algorithm that uses a sequence of SMBG data to determine whether to raise a flag for impending (within 24 hours) significant hypoglycemia. We now describe algorithms 2 and 3 and their test results in detail.
Subjects
We consented 100 subjects with type 1 diabetes (T1DM) and 100 subjects with type 2 diabetes (T2DM). Of these, 179 subjects (90 with T1DM and 89 with T2DM) completed the major portion of SMBG data collection.
Procedure
All subjects signed an IRB-approved consent form and attended a training session, at which they were introduced to the ONE TOUCH ULTRA meter and completed a screening questionnaire. Immediately after training, all subjects visited the UVA laboratory, where blood was drawn for a reference HbA1c measurement. T1DM subjects underwent laboratory HbA1c testing at months 3 and 6 of the following 6 months; T2DM subjects underwent laboratory HbA1c testing at months 2 and 4 of the following 4 months. Self-monitoring (SMBG) data were regularly downloaded from the meters and stored in a database. In parallel, significant hypoglycemia and hyperglycemia events were recorded by a custom-designed automated e-mail/phone tracking system, which contacted all participants every 2 weeks.
Table 19 summarizes the SMBG and severe/moderate hypoglycemia (SH/MH) data collection.
TABLE 19 summary of data Collection
| Variable | T1DM (N = 90 subjects) | T2DM (N = 89 subjects) |
| # SH event | 88 | 24 |
| # MH event | 1,660 | 190 |
| # SMBG reading | 92,737 | 35,306 |
| # BSH reading | 1,039 | 39 |
| # BMH reading | 5,179 | 283 |
The formulas of algorithms 2 and 3 did not change significantly; the equations are practically identical to those given in the March 2002 report of example No. 1. There were only two changes: (a) a typo in the SH/MH risk range list (example No. 1) was corrected, and (b) one line of algorithm 3 was changed. The reason for the latter is explained below.
Because algorithms 2 and 3 remained essentially unchanged, the data collection of example No. 2 can be viewed as a prospective test of these algorithms.
Equation of Algorithm 2
Algorithm 2 is performed as follows:
1) From one month of SMBG data, each subject is classified into one of 15 risk ranges (RCAT) according to his/her Low BG Index (LBGI), as follows:
if(LBGI≤0.25),RCAT=0
if(0.25<LBGI≤0.5),RCAT=1
if(0.50<LBGI≤0.75),RCAT=2
if(0.75<LBGI≤1.00),RCAT=3
if(1.00<LBGI≤1.25),RCAT=4
if(1.25<LBGI≤1.50),RCAT=5
if(1.50<LBGI≤1.75),RCAT=6
if(1.75<LBGI≤2.00),RCAT=7
if(2.00<LBGI≤2.50),RCAT=8
if(2.50<LBGI≤3.00),RCAT=9
if(3.00<LBGI≤3.50),RCAT=10
if(3.50<LBGI≤4.25),RCAT=11
if(4.25<LBGI≤5.00),RCAT=12
if(5.00<LBGI≤6.50),RCAT=13
if(LBGI>6.50),RCAT=14
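Since the RCAT boundaries form a sorted list of upper bounds with "≤" semantics, the classification can be written compactly with a binary search. This Python sketch is illustrative; the helper names are not from the specification.

```python
from bisect import bisect_left

# Upper bounds of the 15 LBGI risk ranges (RCAT 0-14), from the rules above.
_RCAT_BOUNDS = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0,
                2.5, 3.0, 3.5, 4.25, 5.0, 6.5]

def rcat(lbgi):
    """Map an LBGI value to its risk category RCAT (0-14).

    bisect_left treats each bound as inclusive on the left category,
    matching the "LBGI <= bound" rules in the text."""
    return bisect_left(_RCAT_BOUNDS, lbgi)
```

For example, an LBGI of exactly 0.25 still falls in RCAT 0, while any LBGI above 6.50 falls in RCAT 14.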
2) The theoretical probability of future significant hypoglycemia is calculated from a two-parameter Weibull probability distribution, using the distribution function:
F(x) = 1 - exp(-a·x^b) for any x > 0; otherwise F(x) = 0. The parameters of this distribution depend on the desired prediction horizon and are given in the report of example No. 1. If implemented in a meter, this step provides a continuous estimate of the risk of significant hypoglycemia, e.g., "50% in the next month".
3) Each subject is classified into a minimal, low, moderate, or high risk group for future significant hypoglycemia; these ranges are defined as follows: minimal risk (LBGI ≤ 1.25); low risk (1.25 < LBGI ≤ 2.5); moderate risk (2.5 < LBGI ≤ 5); and high risk (LBGI > 5). If implemented in a meter, this step provides a discrete estimate of the risk of significant hypoglycemia, e.g., "high risk in the next month".
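Steps 2 and 3 can be sketched as follows. The Weibull parameters a and b are given only in the example No. 1 report, so they appear here as unspecified arguments; applying F directly to the risk category is likewise an illustrative assumption.

```python
import math

def prob_significant_hypo(x, a, b):
    """Two-parameter Weibull distribution function F(x) = 1 - exp(-a * x**b),
    defined for x > 0 and 0 otherwise. The parameters a and b depend on the
    prediction horizon and are placeholders here, not published values."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-a * x ** b)

def risk_group(lbgi):
    """Discrete risk group for future significant hypoglycemia (step 3)."""
    if lbgi <= 1.25:
        return "minimal"
    if lbgi <= 2.5:
        return "low"
    if lbgi <= 5.0:
        return "moderate"
    return "high"
```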
Equation of Algorithm 3
First, to avoid calculating the reference risk value specified in the algorithm 3 section of the example No. 1 report, we modified one line of code: algorithm 3 now uses the output of algorithm 2 instead. We introduced this change on 10/28/2002 to produce sample results for two subjects. At that point it was clearly convenient to demonstrate the operation of algorithm 3 with a simple Excel spreadsheet, which is possible only if calculation of the reference values is avoided. This change does not alter the accuracy of algorithm 3, so it was kept as a permanent change that simplifies the programming of algorithm 3. No further changes were introduced after 10/28/2002. Below we give the same algorithm 3 formulas as in the example No. 1 report, with the changed line marked.
1) The low BG risk value (RLO) is calculated for each BG reading (where BG is measured in mg/dl; the coefficients differ if the units are mmol/l) by the following code:
scale=(ln(bg))**1.08405-5.381
risk=22.765*scale*scale
if(bg≤112.5)then
RLO=risk
else
RLO=0
endif
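A direct Python transcription of the RLO pseudocode above (the coefficients are those stated for BG in mg/dl; the function name is illustrative):

```python
import math

def rlo(bg):
    """Low BG risk value for a BG reading in mg/dl, following the
    pseudocode above. Readings above 112.5 mg/dl carry no low-BG risk."""
    scale = math.log(bg) ** 1.08405 - 5.381
    risk = 22.765 * scale * scale
    return risk if bg <= 112.5 else 0.0
```

Lower readings receive larger risk values, and the risk vanishes near the 112.5 mg/dl pivot.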
2) For each SMBG reading we compute a running value LBGI(n), and another statistic SBGI(n), which is the standard deviation of the low BG risk values. Both are computed backward from each SMBG reading over a specific window of n readings, i.e., including that reading and the (n-1) readings preceding it.
3) The calculation of LBGI(n) and SBGI(n) uses a provisional means procedure, which leads to recursive code as follows:
Initial values at n (or, more precisely, at max(1, n-k), to account for meter readings with ordinal number less than k):
LBGI(n)=rlo(n)
Rlo2(n)=0
iterating j over successive values between 1 and n:
LBGI(j)=((j-1)/j)*LBGI(j-1)+(1/j)*RLO(j)
rlo2(j)=((j-1)/j)*rlo2(j-1)+(1/j)*(RLO(j)-LBGI(j))**2
after completing the cycle, we obtain the value of LBGI (n), then calculate
SBGI(n)=sqrt(rlo2(n))
From this calculation we save two sets of values, for n = 150 and n = 50 (i.e., over the most recent 150 and 50 observations).
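The provisional means recursion can be sketched in Python as follows. The function operates on the most recent n low BG risk values and follows the recursion in the text; note that the SBGI obtained this way is the text's running approximation, not the textbook sample standard deviation.

```python
def lbgi_sbgi(rlo_values, n):
    """Running LBGI(n) and SBGI(n) over the last n low BG risk values,
    using the provisional means recursion from the text."""
    window = rlo_values[-n:]          # most recent n values (fewer if short)
    lbgi, rlo2 = 0.0, 0.0
    for j, r in enumerate(window, start=1):
        lbgi = ((j - 1) / j) * lbgi + (1 / j) * r          # running mean
        rlo2 = ((j - 1) / j) * rlo2 + (1 / j) * (r - lbgi) ** 2
    return lbgi, rlo2 ** 0.5
```

For a constant sequence the LBGI equals that constant and the SBGI is zero; any variation in the risk values produces a positive SBGI.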
4) Decision rule: At each SMBG reading, the program decides whether a warning flag for impending SH should be raised. If the flag is already raised, the program decides whether to lower it. These decisions depend on three threshold parameters α, β, γ, which operate as follows:
for low-mid risk subjects (LM group):
FLAG=0.
if(LBGI(150)≥2.5 and LBGI(50)≥1.5*LBGI(150) and SBGI(50)≥SBGI(150)) FLAG=1.
if(RLO≥LBGI(150)+1.5*SBGI(150)) FLAG=1.
In other words, at each SMBG reading a flag is raised if one of two conditions is met:
1) according to algorithm 2 applied to the last 150 readings, the subject is at moderate-to-high risk for SH, and both the LBGI and its standard deviation increase over the last 50 readings; or
2) a surge of the low BG index is detected by the second inequality.
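The two flag-raising conditions can be sketched as a single predicate; the argument names are illustrative:

```python
def raise_flag(rlo_now, lbgi150, sbgi150, lbgi50, sbgi50):
    """Decision-rule sketch: True if a warning flag for impending
    significant hypoglycemia should be raised at the current reading."""
    # Condition 1: moderate-to-high running risk that is rising, with
    # rising variability of the low BG risk over the last 50 readings.
    if lbgi150 >= 2.5 and lbgi50 >= 1.5 * lbgi150 and sbgi50 >= sbgi150:
        return True
    # Condition 2: a single surge of the low BG risk value.
    if rlo_now >= lbgi150 + 1.5 * sbgi150:
        return True
    return False
```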
A heuristic explanation of these statements is given in the report of example No. 1. As described above, the first statement of "if" has changed its original form in order to avoid using the reference LBGI and thus the output of algorithm 2.
Once raised, the flag remains up for 24 hours, as explained in the report of example No. 1. To evaluate the accuracy of algorithm 3, we used the previously proposed technique of computing two measures:
1) predictive% of impending SH/MH events within 24 hours, and
2) the ratio Rud of time with the flag up to time with the flag down (nuisance index).
The percentage of predicted SH events should be high, while the ratio Rud should be relatively low: by increasing the percentage of predicted SH events we inevitably increase the number of raised flags, which in turn increases the number of potential "false alarms". Since a "false alarm" is not clearly defined (see the report of example No. 1), we use Rud as an indicator of the effectiveness of algorithm 3.
Our previous best result, given in the report of example No. 1, was 50% of SH/MH events predicted within 24 hours with Rud = 1:10, i.e., one day of high-risk warning followed on average by 10 days without a warning. Here we keep the same flag-up/flag-down ratio and compute the percentage of SH and MH events predicted within 24 hours for T1DM and T2DM subjects, respectively. For this prediction we do not use BSH and BMH events, since these are recorded by the meter and are therefore part of the predictor.
Estimating the risk of significant hypoglycemia in 1-3 months-the accuracy of Algorithm 2
We estimate the predictive power of algorithm 2 as follows:
1) First, we calculated the LBGI from one month of SMBG data and classified each subject into the minimal, low, moderate, or high risk group for significant hypoglycemia as described above.
2) Then, over the following 1-3 months, we counted the number of SH, BSH, MH, and BMH events prospectively recorded for each subject.
Figures 16-19 give the numbers of SH, BSH, MH, and BMH events per subject observed during the following 1 or 3 months after one month of SMBG, for T1DM and T2DM respectively. Statistical comparisons are also included.
In addition, a direct linear regression using the LBGI, the SH history reported in the screening questionnaire (number of SH events in the past year), and baseline HbA1c significantly predicted (R² = 0.62, F = 48, p < 0.0001) the total number of significant hypoglycemic events (SH + MH + BSH + BMH) over the following 3 months. Ranked by importance, the predictors were: 1) the LBGI (t = 8.2, p < 0.0001), which alone accounts for 55% of the variance in future significant hypoglycemia (R² = 0.55); 2) SH history (t = 3.6, p < 0.0005), which accounts for another 5% of the variance; and 3) HbA1c (t = 2.2, p < 0.03), which accounts for another 2%. This confirms the previous finding that the LBGI is the most important predictor of future hypoglycemia, while the contribution of HbA1c to the prediction is moderate.
The theoretical probability of future significant hypoglycemia calculated by the Weibull model is very consistent with the significant hypoglycemia events observed in the future-the coefficient of determination is over 90% for both severe and moderate events.
Prediction of impending (within 24 hours) significant hypoglycemia - accuracy of Algorithm 3
The tables below give the accuracy of short-term (within 24 hours) prediction of SH and MH events for T1DM and T2DM subjects, respectively. Each row of tables 20 and 21 gives the percentage of events predicted when a given number of SMBG readings is available in the 24 hours before the event. For example, the first row of each table gives the percentage of events predicted regardless of whether any SMBG reading occurred within the 24 hours before the event. The accuracy of prediction increases with the number of readings before the event. Thus, if someone measures 3 or more times per day, the meter can raise an alarm for, and possibly help avoid, more than half of significant hypoglycemic events.
Note: For evaluating the accuracy of algorithm 3, we used only SH and MH events reported independently of SMBG through the e-mail/phone system, which asked participants to report the date and time of SH and MH every two weeks. As our survey showed, participants sometimes reported the time and date of the last SMBG reading before an event rather than the exact time and date of the event, because querying the meter helped them recall. As a result, for a substantial number of events the interval between the last SMBG reading and the event is close to zero. To account for such suspect time records, column 3 of each table gives the accuracy of algorithm 3 restricted to events for which the lead alarm time is at least 15 minutes. Given an average lead alarm time of 11 hours, we conclude that in most cases the alarm occurs early enough to allow adequate self-treatment.
In tables 20 and 21, the nuisance index is set to Rud = 1:10 to match the report of example No. 1.
TABLE 20 Accuracy of Algorithm 3 in T1DM
TABLE 21 Accuracy of Algorithm 3 in T2DM
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The foregoing embodiments are therefore to be considered in all respects illustrative only and not limiting of the invention described herein. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (21)
1. A system for estimating a patient's HbA1c based on BG data collected over a first predetermined duration, the system comprising:
a database component for storing a database identifying the BG data; and
a processor programmed to:
prepare, using a series of mathematical formulas, the data for estimating HbA1c by:
pre-processing the data,
validating a sample of the BG data by sample selection criteria, and
estimating HbA1c if the sample is valid.
2. The system of claim 1, wherein the first predetermined duration is about 60 days.
3. The system of claim 1, wherein the first predetermined duration ranges from 45 days to 75 days.
4. The system of claim 1, wherein the first predetermined duration ranges from 45 days to 90 days.
5. The system of claim 1, wherein the pre-processing of the data for each patient comprises:
converting plasma BG to whole blood BG in mg/dl;
converting BG measured in mg/dl to mmol/l units; and
and calculating the low glycemic index and the high glycemic index.
6. The system according to claim 1, wherein the data of each patient are pre-processed using predetermined mathematical formulas defined as:
converting plasma BG to whole blood BG by BG = PLASBG/1.12, where BG is measured in mg/dl and PLASBG is plasma glucose;
converting BG measured in mg/dl to mmol/l units by BGMM = BG/18; and
calculating the low and high glycemic indices using predetermined mathematical formulas defined as follows:
Scale = [ln(BG)]^1.0845 - 5.381, where BG is measured in mg/dl,
Risk1 = 22.765*(Scale)^2, where
RiskLO = Risk1 if BG < 112.5 (a risk of low BG, contributing to the LBGI), otherwise RiskLO = 0, and
RiskHI = Risk1 if BG > 112.5 (a risk of high BG, contributing to the HBGI), otherwise RiskHI = 0,
BGMM1 = the mean BGMM per patient,
RLO1 = the average RiskLO per subject,
RHI1 = the average RiskHI per subject,
L06 = the average RiskLO calculated over nighttime readings only, or a default value if there are no nighttime readings,
N06, N12, N24 = the percentages of SMBG readings in each time interval,
NC1 = the total number of SMBG readings in the first predetermined duration; and
NDAYS = the number of days with SMBG readings in the first predetermined duration.
7. The system of claim 6, wherein N06, N12, and N24 are the percentages of SMBG readings in the time intervals 0:00-6:59, 7:00-12:59, and 18:00-23:59, respectively.
8. The system according to claim 6, comprising assigning a group value according to the patient's high BG index, calculated with predetermined mathematical formulas defined as:
if RHI1 ≦ 5.25 or if RHI1 ≧ 16, then group is designated 0,
if RHI1>5.25 and if RHI1<7.0, then group is specified as 1,
if RHI1 ≧ 7.0 and if RHI1<8.5, then group is designated 2, and
if RHI1 ≧ 8.5 and if RHI1<16, then group is designated 3.
9. The system of claim 8, comprising estimating using a predetermined mathematical formula defined as:
E0 = 0.55555*BGMM1 + 2.95,
E1 = 0.50567*BGMM1 + 0.074*L06 + 2.69,
E2 = 0.55555*BGMM1 - 0.074*L06 + 2.96,
E3 = 0.44000*BGMM1 + 0.035*L06 + 3.65; and
EST2 = E1 if Group = 1, or EST2 = E2 if Group = 2, or EST2 = E3 if Group = 3, otherwise EST2 = E0, where EST2 is the resulting estimate.
10. The system of claim 9, comprising further modifying the estimate using a predetermined mathematical formula defined as:
if L06 is the default value, EST2 = E0,
if RLO1 ≤ 0.5 and RHI1 ≤ 2.0, EST2 = E0 - 0.25,
if RLO1 ≤ 2.5 and RHI1 > 26, EST2 = E0 - 1.5*RLO1, and
if (RLO1/RHI1) ≤ 0.25 and L06 > 1.3, EST2 = EST2 - 0.08.
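For illustration, the group assignment, group-specific estimates, and corrections of claims 8-10 can be combined into one routine. The function name and the l06_default argument (marking the case where L06 takes its default value) are assumptions, not claim language:

```python
def estimate_hba1c(bgmm1, l06, rlo1, rhi1, l06_default=False):
    """HbA1c estimation sketch following claims 8-10. Variable names follow
    the claims: BGMM1 = mean BG in mmol/l, L06 = nighttime low BG index,
    RLO1/RHI1 = low/high BG indices."""
    # Claim 8: group assignment from the high BG index RHI1.
    if 5.25 < rhi1 < 7.0:
        group = 1
    elif 7.0 <= rhi1 < 8.5:
        group = 2
    elif 8.5 <= rhi1 < 16:
        group = 3
    else:
        group = 0
    # Claim 9: group-specific linear estimates.
    e0 = 0.55555 * bgmm1 + 2.95
    e1 = 0.50567 * bgmm1 + 0.074 * l06 + 2.69
    e2 = 0.55555 * bgmm1 - 0.074 * l06 + 2.96
    e3 = 0.44000 * bgmm1 + 0.035 * l06 + 3.65
    est2 = (e0, e1, e2, e3)[group]
    # Claim 10: corrections.
    if l06_default:
        est2 = e0
    if rlo1 <= 0.5 and rhi1 <= 2.0:
        est2 = e0 - 0.25
    if rlo1 <= 2.5 and rhi1 > 26:
        est2 = e0 - 1.5 * rlo1
    if rhi1 > 0 and rlo1 / rhi1 <= 0.25 and l06 > 1.3:
        est2 = est2 - 0.08
    return est2
```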
11. The system of claim 10, wherein estimating the patient's HbA1c from BG data collected over the first predetermined duration comprises:
estimating HbA1c using at least one of four predetermined mathematical formulas defined as follows:
a) HbA1c = EST2, as computed or corrected by the system according to claim 10; or
b) HbA1c = 0.809098*BGMM1 + 0.064540*RLO1 - 0.151673*RHI1 + 1.873325, where
BGMM1 is the average BG in mmol/l of the system according to claim 6,
RLO1 is the low BG index of the system according to claim 6, and
RHI1 is the high BG index of the system according to claim 6; or
c) HbA1c = 0.682742*HBA0 + 0.054377*RHI1 + 1.553277, where
HBA0 is a previous reference HbA1c reading taken during a second predetermined duration prior to the estimation, and
RHI1 is the high BG index of the system according to claim 6; or
d) HbA1c = 0.41046*BGMM1 + 4.0775,
where BGMM1 is the average BG in mmol/l of the system according to claim 6.
12. The system of claim 11, wherein the second predetermined duration is about 3 months.
13. The system according to claim 11, wherein the second predetermined duration ranges from 2.5 months to 3.5 months.
14. The system according to claim 11, wherein the second predetermined duration ranges from 2.5 months to 6 months.
15. The system of claim 11, wherein the sample is validated by sample selection criteria, the HbA1c estimate being used only if the first-predetermined-duration sample meets at least one of the following four criteria:
a) a test frequency criterion, wherein the first-predetermined-duration sample comprises an average of at least 1.5 to 2.5 measurements per day;
b) an alternative test frequency criterion, wherein the sample comprises at least a third predetermined sample period having an average frequency of about 1.8 readings/day;
c) data randomization criterion 1, wherein the HbA1c estimate is validated and displayed only when the ratio RLO1/RHI1 > 0.005,
wherein
RLO1 is the low BG index of the system according to claim 6, and
RHI1 is the high BG index of the system according to claim 6; or
d) data randomization criterion 2, wherein the HbA1c estimate is validated and displayed only when N06 > 3%,
wherein
N06 is the percentage of nighttime readings of the system according to claim 6.
16. The system of claim 15, wherein the third predetermined duration is at least 35 days.
17. The system of claim 15, wherein the third predetermined duration ranges from 35 days to 40 days.
18. The system of claim 15, wherein the third predetermined duration ranges from 35 days to as long as the first predetermined duration.
19. The system of claim 1, further comprising:
a BG acquisition mechanism for acquiring BG data from a patient.
20. The system of claim 11, wherein the sample is validated by sample selection criteria, the HbA1c estimate being used only if the first-predetermined-duration sample meets at least one of the following criteria:
a) a test frequency criterion, wherein the first-predetermined-duration sample comprises an average of at least 1.5 measurements per day; and
b) data randomization criterion 1, wherein the HbA1c estimate is validated or indicated only when the ratio RLO1/RHI1 > 0.005,
wherein
RLO1 is the low BG index of the system according to claim 6, and
RHI1 is the high BG index of the system according to claim 6; or
c) data randomization criterion 2, wherein the HbA1c estimate is validated or indicated only when N06 > 3%,
wherein
N06 is the percentage of nighttime readings of the system according to claim 6.
21. The system of claim 1 or 19, wherein the estimation of the patient's HbA1c from the collected BG data is implemented without previous HbA1c information.
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US40297602P | 2002-08-13 | 2002-08-13 | |
| US60/402,976 | 2002-08-13 | ||
| US47837703P | 2003-06-13 | 2003-06-13 | |
| US60/478,377 | 2003-06-13 | ||
| PCT/US2003/025053 WO2004015539A2 (en) | 2002-08-13 | 2003-08-08 | Managing and processing self-monitoring blood glucose |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1084845A1 HK1084845A1 (en) | 2006-08-11 |
| HK1084845B true HK1084845B (en) | 2009-11-20 |