Trends in Pharmaceutical Data Science – A Brief Look at Internally Structured Data in Medical Device Development


Pharmaceutical and medical device companies benefit from using a structured approach to data items, connections between data items, and data containers. Working with data that is internally structured reduces the time needed to create submission reports, eliminates the manual verification of large data tables, and ensures the creation of submission deliverables with no technical errors.

Leading medical device manufacturers have employed internal structure with data items, and connections between data items, in their product development processes to support initial health authority submissions and ongoing product lifecycle management. This article highlights some areas where the pharmaceutical industry may find inspiration from the device industry.




Overview

In this second article of the series (1), we briefly examine some of the data and connections used by engineering and product development teams in medical device development. Many of the leading device manufacturers have employed processes and technologies to support a truly data-centric approach to their formal submissions as well as their internal reports. This includes managing data for risk, requirements, testing, and other typical product development domains. Like the pharmaceutical industry, device manufacturers must comply with both health authority regulations and industry standards. Although the two industries use different data sets and follow different standards, both benefit from managing data and relationships in a rigorous, structured way.

Data Items

Consider structure in the fundamental sense of how an individual data item is internally organized; the data item along with its attributes and connections.

As discussed in the previous article, a data item with internal structure is not simply an “entry or row in MS Excel.” It is a unique entity that exists with a set of attributes. It exists over time and tracks who uses it, if it can be changed, and who approves it. A data item can connect with other data items for dynamic links, which can be traced and displayed to the user. It may exist in many locations while still being a single data item. If something about the item changes, the change is seen everywhere the data item is used. It is a single data item being used in multiple locations (Fig. 1). These different locations may be data tables, prose sections of reports (where the data item is automatically inserted), documents, projects, and traces.


Figure 1: Internally structured data item

This approach to structuring individual data items creates a foundation for data science initiatives in both formal and informal reporting projects. It provides the flexibility to use data in multiple formats and reports while ensuring data integrity and a trustworthy chain of custody throughout the life span of every data item. Most data items have a life span of years and using this approach to structure will support the many ways in which data is consumed, including in fully structured online filings to health authorities.
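As a minimal illustration of this idea, the Python sketch below models a data item that carries its own identity, attributes, connections, and audit history. The class and field names are illustrative assumptions, not a description of any particular product; the point is that every usage location holds the item's identifier rather than a copy of its value, so an update made once is visible everywhere.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataItem:
    """A uniquely identified data item with attributes, links, and an audit log."""
    item_id: str                    # stable identity, shared by every usage location
    value: object                   # the underlying value
    attributes: dict = field(default_factory=dict)  # e.g. owner, approval state, units
    links: list = field(default_factory=list)       # connections to other item_ids
    audit_log: list = field(default_factory=list)   # who changed what, and when

    def update(self, user: str, new_value: object) -> None:
        # Record the change; every location referencing item_id sees it immediately,
        # because locations hold the id, not a copy of the value.
        self.audit_log.append((datetime.now(timezone.utc), user, self.value, new_value))
        self.value = new_value

# Documents, tables, and traces reference the item by id rather than copying it:
registry = {}
item = DataItem(item_id="REQ-0042", value="Pump flow rate shall be 5 mL/min ± 2%")
registry[item.item_id] = item
table_cell = report_paragraph = "REQ-0042"  # two locations, one underlying item
item.update(user="j.smith", new_value="Pump flow rate shall be 5 mL/min ± 1%")
assert registry[table_cell] is registry[report_paragraph]  # one item, many locations
```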

Risk Management

Risk Management activities in device development comprise several processes and deliverables. The foundational industry standard is ISO 14971 (2). The current version of this standard was released in 2019 and describes processes to identify, evaluate, and control risk for medical devices. In the general case, a list of hazards for the device is compiled along with sequences of events that may produce hazardous situations. The risk is assessed based on the probability of a hazardous situation occurring as a result of a sequence of events leading from the hazard, as well as the probability of the hazardous situation leading to a harm (see ISO 14971:2019 Annex C). A harm has a severity, and the overall risk is calculated from the probabilities of occurrence and the severity of harm. The risk is then mitigated by applying risk controls, which themselves may expose additional risks. Hazards, harms, situations, and other entities are captured as data items with specific, dynamic connections between them (Fig. 2). If a data item is changed or updated, the connected data items can be updated, or the user notified to review the new information. In an actual device development project, there can be thousands of risk items. Using well-structured data items for each element ensures that connected items are updated, and reviewers notified, when changes occur, eliminating the possibility of a user overlooking an update to the design.


Figure 2: Medical device risk data items and connections
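To make the probability model above concrete, here is a minimal sketch in Python. The scoring function and the example values are assumptions chosen for illustration; they are not scales or thresholds prescribed by ISO 14971.

```python
# P1 is the probability that a sequence of events produces the hazardous situation;
# P2 is the probability that the hazardous situation leads to the harm.
# All numeric values below are assumed example values.

def risk_score(p1: float, p2: float, severity: int) -> float:
    """Overall probability of harm is P1 * P2; risk combines it with severity."""
    return p1 * p2 * severity

p1, p2, severity = 1e-3, 1e-2, 4
initial = risk_score(p1, p2, severity)

# A risk control that interrupts the sequence of events reduces P1; the residual
# (post-mitigation) risk is recomputed automatically when the linked items change.
residual = risk_score(p1 * 0.1, p2, severity)
print(f"initial={initial:.2e}, residual={residual:.2e}")
```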

A common risk activity performed early in device development is a Preliminary Hazard Analysis (PHA). It is often used to identify, and then categorize, hazards that may be associated with the use of a device. In many cases, a PHA uses prompts or questions to drive the process. ISO/TR 24971 (3) includes an Annex A with a series of questions that are useful when performing a PHA. To perform a PHA using a data item approach with structure and connections, device companies begin by prompting the user with a question and then guiding the user through a series of steps to complete the exercise. Each step of the process generates a data item and connects the data item to other items in the “row” of the risk activity (Fig. 3). Probabilities of occurrence and severities can both be numeric attributes on data items, leading to numeric values for risk and residual (post-mitigation) risk. When a change is made, numeric calculations automatically update. This ensures data integrity, especially when a risk activity has hundreds of “risk rows” and thousands of risk data items. Note also that there are connections between risk mitigations (controls) and device requirements. The manufacturer needs to track that a risk control has been (or will be) implemented in the design. The data items used to describe the design are requirements. This means that there is an inherent connection between risk and requirements data in a device development project.


Figure 3: Medical device PHA example structured “risk row”

This approach also uses data items and connections that are external to the specific PHA activity. For example, when the user enters a hazard or a harm, it is done by selecting from preapproved repositories of hazards and harms (red boxes in Fig. 3). Using this data-centric, structured approach has four advantages over an unstructured approach (a short code sketch of the repository pattern follows the list):

  1. Approved hazards and harms already exist in the repositories and cannot be “made up” by the user; all projects draw from the same approved repositories (there may be different repositories for different product categories, families, etc.);
  2. Harms in a repository include severity ratings that are approved by the Chief Medical Officer and cannot be changed by a user in a PHA activity; these ratings can be numeric, so an overall risk can be numeric and not just a string or a piece of text;
  3. Approved changes to repositories trigger notifications to PHA activities that review and update actions are required; if the severity of harm is updated in the repository, it triggers review of the risk activity wherever that harm is used;
  4. Items in repositories are reused in many different projects; if an update is made to a repository, it updates (or triggers review) in all projects where the items are used.
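A small sketch of the repository pattern, with illustrative names and values, shows how these advantages fall out of treating hazards and harms as pre-approved, referenced data items:

```python
# Harms carry centrally approved severities, PHA rows may only reference repository
# entries, and repository changes trigger reviews wherever the entries are used.
# All identifiers and values are illustrative assumptions.

harm_repository = {
    "HARM-001": {"name": "Infection", "severity": 4},  # severity approved centrally
    "HARM-002": {"name": "Bruising",  "severity": 2},
}
usages = {}  # harm_id -> list of PHA rows where the harm is used

def add_pha_row(row_id: str, harm_id: str, p_occurrence: float) -> dict:
    harm = harm_repository[harm_id]  # KeyError: harms cannot be "made up" by a user
    usages.setdefault(harm_id, []).append(row_id)
    return {"row": row_id, "harm": harm_id,
            "risk": p_occurrence * harm["severity"]}

def update_severity(harm_id: str, new_severity: int) -> list:
    """Approved repository change: returns the PHA rows flagged for review."""
    harm_repository[harm_id]["severity"] = new_severity
    return usages.get(harm_id, [])

row = add_pha_row("PHA-17", "HARM-001", p_occurrence=0.01)
flagged = update_severity("HARM-001", 5)  # every row using HARM-001 is notified
print(row, flagged)
```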

In pharmaceutical applications, Criticality Analysis (CA) is also a risk management activity. Even though language and approaches may differ from device development, this activity will still benefit from using data-driven approaches combining individual data items with structure, reusable data repositories, and dynamic connections between data items (Fig. 4). In this case, the CA may begin with identification of certain causes. These causes may be Critical Process Parameters, Critical Material Attributes, or other items. The items may reside in a specification document, recipe, or other repository, along with the identification of Quality Attributes (QA) and especially Critical Quality Attributes (CQA). The user in a CA activity would draw from the repositories in the same way that a user in a device PHA would draw from their repositories. The same benefits apply to a CA activity as they do to a PHA activity.


Figure 4: Pharmaceutical Criticality Analysis example “risk row”

Requirements and Test Management

Health authorities have defined regulations for how device manufacturers must document and report on requirements and testing for medical devices. In the United States, FDA has an existing regulation regarding Design Controls. Manufacturers are required to have procedures in place to manage three broad categories of requirements: needs, inputs, and outputs (4). They are also required to conduct and document comprehensive design validation and design verification testing on the device. Validation ensures the device satisfies its documented needs and intended uses. Verification ensures the device “design outputs” satisfy the “design inputs.” As a result, manufacturers must manage all requirements and tests for the device as well as the relationships between requirements and tests.

Requirements in a device development project are often created by a combination of methods, including directly authoring a new requirement, importing a requirement from another tool, and reusing an existing requirement from another project or a repository of requirements. The requirements are organized into multiple levels. For example, certain design input requirements satisfy certain user needs, and certain design outputs are allocated to higher-level design inputs. Although FDA mentions three specific levels (needs, inputs, and outputs), in reality there may be many levels of requirements. For example, design inputs may be tiered into multiple levels: system, subsystem, software, etc. The result is often a large set of requirements organized into a hierarchical structure with needs at the top and outputs at the bottom (Fig. 5).


Figure 5: Medical device requirements

In addition to requirements, device manufacturers author, execute, and manage tests of many types. Tests (also sometimes known as test cases or test methods) are associated with one or more requirements to provide evidence of success for both validation and verification of needs, inputs, and outputs. Testing is rarely conducted only one time. In reality, multiple test executions are run over time to monitor and report on the trends during development. This leads to large data sets where each requirement in a project may have many test execution runs (Fig. 6).


Figure 6: Medical device tests
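The sketch below illustrates the requirement-test relationship just described: tests accumulate execution runs over time, and a requirement's current verification status is derived from the most recent run of each linked test. The identifiers and the "latest run wins" status rule are illustrative assumptions.

```python
from datetime import date

# Runs accumulate over time; none are overwritten, so trend data is preserved.
requirements = {"IN-12": {"text": "Battery shall last 8 hours", "tests": ["TC-31"]}}
executions = {"TC-31": [(date(2023, 3, 1), "fail"),
                        (date(2023, 6, 9), "pass")]}

def current_status(req_id: str) -> str:
    """A requirement is verified only if the latest run of every linked test passed."""
    for test_id in requirements[req_id]["tests"]:
        runs = sorted(executions.get(test_id, []))
        if not runs or runs[-1][1] != "pass":
            return "not verified"
    return "verified"

print(current_status("IN-12"))  # -> "verified"; earlier runs remain for trend reports
```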

Note that for some time, FDA has stated its intention to modernize its Quality System Regulation (21 CFR 820) to align more closely with ISO 13485 (5). If or when that happens, manufacturers will still be required to satisfy similar definitions of requirements, tests, and their relationships.

Trace Matrices

One of the most important tools used to help device manufacturers understand their data and connections during development is the trace matrix. Traces are used in many areas to show schematic relationships between risks, requirements, tests, documentation, and other data items and containers. Documents are included in the list because, in a data-centric approach to development, they are technically structured containers of structured data and relationships.

There are innumerable traces possible. A simple example is a trace showing each need (the high-level requirements) and the lower-level inputs that satisfy each need. Such a trace shows any needs for which there are no lower-level inputs. Transposing the trace makes it apparent whether any inputs satisfy no need and may therefore be considered “orphaned.” Traces are easy to generate when using the data-centric approach because dynamic connections between data items were created throughout the development process. The word dynamic is important because it implies that connections may change over time. If the need that had no lower-level input satisfying it is updated to have one or more connected inputs, the trace updates accordingly. This is one of the most powerful results of the structured approach; it leads to the ability to generate traces automatically and update them dynamically over time. Such an approach dramatically reduces the resource time needed to generate important traces and eliminates the need for manual/visual verification of trace accuracy. The data model can, of course, be expanded to include items for BOM, clinical data, and any other data that connects to development items. The fundamental items in the model are risks, requirements, tests, documents, and the connections between them (Fig. 7).


Figure 7: Medical device development data model
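Because connections like “satisfies” are themselves data, trace generation reduces to simple queries over those connections. The sketch below, with illustrative identifiers, builds a needs-to-inputs trace and its transposed orphan check:

```python
# Illustrative needs and design inputs; `satisfies` maps each input to the
# needs it satisfies. The trace and its orphan check are plain queries.

needs  = {"N-1": "Easy to carry", "N-2": "Operates in transport"}
inputs = {"IN-1": "Weight < 2 kg", "IN-2": "Runs on battery", "IN-3": "IP54 housing"}
satisfies = {"IN-1": ["N-1"], "IN-2": ["N-2"]}

# Forward trace: every need with the inputs that satisfy it (empty list = gap).
trace = {n: [i for i, ns in satisfies.items() if n in ns] for n in needs}

# Transposed view: inputs that satisfy no need are "orphaned".
orphans = [i for i in inputs if not satisfies.get(i)]

print(trace)    # {'N-1': ['IN-1'], 'N-2': ['IN-2']}
print(orphans)  # ['IN-3'] -- flagged for review; the trace updates as links change
```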

A trace may be centered on a particular data item. It may show a design input requirement together with whatever connections are desired. For example, a requirement may show the higher-level needs it satisfies, the lower-level requirements associated with it, its tests, current test execution results, risk controls that it implements, and any failure modes defined for the requirement (Fig. 8). This type of trace puts a focus on one specific item and displays its connections.


Figure 8: Medical device data item trace

Another type of trace is an overview trace showing a series of data items with some level of hierarchical order and other related connections. In this situation, the user may need to see a list of the needs in a device project along with evidence of any existing validation tests for the needs. For each need, the trace may then show the next level of requirements (inputs) that satisfy each need, along with any tests or risks (failure modes) for the inputs. Then, the trace displays the ultimate design outputs for each design input, again with any potential risks and tests. Each design output may also display whether it is considered an essential output, something required by 820.30: “…and shall ensure that those design outputs that are essential for the proper functioning of the device are identified (6).”


Figure 9: Medical device overview trace

Any number of traces can be defined once and then used multiple times in any project. Traces populate automatically and update dynamically. Referring back to Figure 7, a trace traverses the lines between the circles and rectangles; it can start at any point and follow any path. Traces are an almost “free” result of using the data structure approach described in this article.

Summary

This article has covered some of the basics of using internally structured data for authoring, connecting, and tracing common data items used in medical device product development. The pharmaceutical industry has different data items and connections, but it must use a similarly data-centric approach to gain the many advantages that have been demonstrated with devices.

The power is evident in the ability to manage and render data in various ways, over a period of time, without altering source data. This fundamental aspect of internally structured data is what gives it the power to handle all types of reporting, whether device or pharmaceutical. In the next article, we will examine examples of data mapping and tracing from both the medical device and pharmaceutical industries and show how the two have more in common than many may realize.

Author Information


About Cognition Corporation

Cognition Corporation, headquartered in Lexington, Massachusetts, develops, sells, and supports product development and compliance solutions for the pharmaceutical and medical device industries. Its Software-as-a-Service solutions help companies meet regulations faster with real-time traceability, guided design controls, and “change once, update everywhere” functionality, turning manual and disconnected data into streamlined, structured submissions that help them get to market faster.

Visit www.cognition.us.

About Astrix

Astrix is the unrivaled market leader in creating & delivering innovative strategies, technology solutions, and people to the life science community. Through world-class people, process, and technology, Astrix works with clients to fundamentally improve business, scientific, and medical outcomes and the quality of life everywhere. Founded by scientists to solve the unique challenges of the life science community, Astrix offers a growing array of fully integrated services designed to deliver value to clients across their organizations. To learn the latest about how Astrix is transforming the way science-based businesses succeed today, visit www.astrixinc.com.


References

1. Astrix, Inc. Introduction to Pharmaceutical Data Items and Their Structure. Accessed October 12, 2023. https://astrixinc.com/introduction-to-pharmaceutical-data-items-and-their-structure/

2. ISO. ISO 14971:2019 – Application of risk management to medical devices. December 2019. https://www.iso.org/standard/72704.html

3. ISO. ISO/TR 24971:2020 – Guidance on the application of ISO 14971. June 2020. https://www.iso.org/standard/74437.html

4. FDA. 21 CFR 820.30 – Design controls. Accessed October 12, 2023. https://www.ecfr.gov/current/title-21/chapter-I/subchapter-H/part-820/subpart-C/section-820.30

5. FDA. FDA Proposal to Align its Quality Systems with International Consensus Standard Will Benefit Industry and Other Regulators. February 2022. https://www.fda.gov/international-programs/international-programs-news-speeches-and-publications/fda-proposal-align-its-quality-systems-international-consensus-standard-will-benefit-industry-and

6. FDA. 21 CFR 820.30 – Design controls. Accessed October 12, 2023. https://www.ecfr.gov/current/title-21/chapter-I/subchapter-H/part-820/subpart-C/section-820.30

Introduction to Pharmaceutical Data Items and Their Structure

Structuring the data used in pharmaceutical reporting will reduce the time needed to create reports, eliminate the manual (human) verification of data tables, and ensure error-free submissions. The use of structured data is particularly powerful in Module 3, which includes large, complex sets of data related to product quality. Using a structured approach to data from source to submission improves data integrity, follows ALCOA+, and is a foundation for working towards the ISPE Pharma 4.0™ initiative [1].

Throughout the CGMP data life cycle, CDER and CBER stress the importance of data integrity. “…data integrity refers to the completeness, consistency, and accuracy of data. Complete, consistent, and accurate data should be attributable, legible, contemporaneously recorded, original or a true copy, and accurate (ALCOA). Data integrity is critical throughout the CGMP data life cycle, including in the creation, modification, processing, maintenance, archival, retrieval, transmission, and disposition of data after the record’s retention period ends. System design and controls should enable easy detection of errors, omissions, and aberrant results throughout the data’s life cycle.” [2]

With internal structure, each data item is unique, related to other data items, and can be part of an overall taxonomy or ontology [3]. A structured data item can be reused in many places and still be a single item. If something about the item changes, the change will be seen “everywhere” because it is a single, unique data item used in more than one location.

Overview

In this first article of the series, we use the example of drug stability reporting to consider problems with a lack of internal structure at the data item level and how structure can reduce or eliminate those problems. As part of the analytical development domain of Chemistry, Manufacturing, and Controls (CMC), stability testing generates large data sets that are used in complex data tables. CMC reports, including stability, are included in Module 3 (Fig. 1) of the Common Technical Document (CTD) [4].

 

Figure 1: The Common Technical Document (CTD)

CMC is responsible for assuring that the product sold has quality attributes similar to those of the product demonstrated to be safe and effective, and that product quality is consistent and meets appropriate standards. A key outcome of CMC work is confidence that the drug described on a label is exactly the drug used by a patient.

Critical CMC Elements:

  • Testing of raw materials
  • Where/how the product is manufactured
  • Monitoring relevant test methods for quality
  • Product quality and consistency (and their controls)
  • Identification/management of product quality attributes
  • The shelf life of the product

Drug Stability Testing

Pharmaceutical products, both large molecule and small molecule, require many types of testing to demonstrate their safety, efficacy, and overall quality. One type of testing is for drug stability. This testing is essential because it is used to define the storage conditions and shelf life of a product. Drug stability testing generates large, complex data sets that are time-consuming to generate, tedious to verify, and prone to errors.

Stability is defined as the extent to which a product retains, within specified limits, and throughout its period of storage and use (i.e., its shelf life), the same properties and characteristics that it possessed at the time of its manufacture. [5]

The International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) has established guidelines for drug stability testing. Guideline Q1A(R2), Stability Testing of New Drug Substances and Products, outlines the requirements for stability testing during the product development process, including the design of stability studies, the selection of appropriate test conditions, and the establishment of shelf life and storage conditions. “The purpose of stability testing is to provide evidence on how the quality of a drug substance or drug product varies with time under the influence of a variety of environmental factors, such as temperature, humidity, and light, and to establish a retest period for the drug substance or a shelf life for the drug product and recommended storage conditions.” [6]

Stability testing is conducted in preclinical, clinical, and technology transfer/commercialization stages (as well as annually, once a product is commercialized) to establish and maintain an accurate assessment of the product. Data is used to document and demonstrate the product’s overall stability profile. The goal of stability testing is to monitor any changes in properties over time under various environmental conditions, such as temperature, humidity, and light exposure.

Testing for stability generates large data sets. Consider the example of a drug that is to be produced in two strengths and three batches for each strength. The drug will be packaged in three different container types. Samples are taken for each combination and then stored for testing over a period of time. The samples are stored under specified storage conditions and several tests are performed on each sample at designated time points. This testing can result in ~20,000 data points for a stability report (Fig. 2). These results must be entered into carefully controlled and formatted data tables which must be verified, approved, and released for both internal reports and formal submissions.


Figure 2: Stability testing generates large data sets
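A quick back-of-the-envelope calculation shows how the combinations multiply. The strength, batch, and container counts come from the example above; the storage-condition, time-point, and per-pull test counts are assumed values chosen only to illustrate the order of magnitude.

```python
# Counts from the example above:
strengths, batches, containers = 2, 3, 3

# Assumed illustrative values:
storage_conditions = 4   # e.g. long-term, intermediate, accelerated, stressed
time_points = 9          # e.g. 0, 3, 6, 9, 12, 18, 24, 36, 48 months
tests_per_pull = 30      # assays, impurities, physical tests, ...

data_points = (strengths * batches * containers
               * storage_conditions * time_points * tests_per_pull)
print(data_points)  # 19440 -- on the order of 20,000 results to table and verify
```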

The complete drug stability report for the product includes more than large data tables. It may also include narrative content, prose, statistical analysis, and graphics. All parts of the report are time-consuming to create, but it is the heavy lifting of the large, complex data tables that drives the significant resource requirements for CMC and regulatory teams to complete such work successfully.

Many organizations have excellent tools in their R&D facilities, laboratories, and manufacturing plants. They also have well-documented quality and business processes. All of this helps generate large amounts of valuable data. However, many companies still rely on humans to transcribe, aggregate, and format the data for use in the critical documents and reports required for submission to health authorities. For many CMC reports, especially drug stability, the manual creation of large data tables inevitably introduces errors.

Critical Quality Attributes

Many pharmaceutical companies are good at storing data but are challenged to use that data to solve complex Critical Quality Attribute (CQA) problems. Data is not always well-structured to support technology transfer, filings, and annual reporting. The process of moving data from source systems to submission is a major burden on CMC and often consumes 15-20% of FTE time (transcribing, aggregating, analyzing, and reporting). CMC costs climb even higher with the extra work caused by manual data verification, error correction, and resubmission efforts. Many people, using many tools, “touch” data in an uncontrolled fashion, which reduces credibility with health authorities (Fig. 3). The continuous looping through people and tools adds time and opportunities for error.

Figure 3: Many people, using many tools, leads to questionable results

In many cases, a single piece of data is used in multiple locations. A quality attribute defined in a Specification Document may be used in a Justification of Specifications report or in a Criticality Analysis worksheet. Copying and pasting data between different tools is dangerous because each copy is now its own independent entity. Traceability is nearly impossible since it requires a manual, or visual, inspection to confirm exactly “what is connected to what” and that a change to an item in one location is also applied in every other location where it is used (Fig. 4). This unstructured approach is ripe for errors and omissions. Pharmaceutical scientists should be spending time doing science, but they end up spending significant time organizing, finding, changing, tracing, and verifying data for critical CMC reports.


Figure 4: What a tangled web is woven with unstructured approaches to CMC data

 

Reducing Errors in CMC Reporting

Errors in Module 3 reports cause delays in filings. A recent filing was submitted to a health authority with ~140 errors in Section 3.2 tabulated data. The stability data was incorrectly reported to the health authority. There were more than 75 transcription errors for real-time, stressed, and accelerated testing found in 3.2.S.7.3 and 3.2.P.8.3. These errors occurred during transcription of data points from source (LIMS) to the spreadsheets used by the CMC team. The result was a delay of more than 45 days, with significant lost revenues for the product.

There is inherent risk in the manual transcription of 20,000 data points and the manual creation of data tables using unstructured data approaches. Even with strong procedural controls for manual data verification, the sheer volume of data handled causes problems. Such errors bring into question the overall data integrity of the program and reduce credibility with health authorities. Credibility is hard to earn and easy to lose.

Similar problems occur with other unstructured analytical reports. For example, a Batch Analysis report may have 4,500 data points, manually entered into MS Excel™. It is common to find 110-115 data errors (~2.5%), dozens of transcription errors, and multiple calculation errors. Again and again, the same underlying causes are responsible for errors:

  • 100% manual data entry
  • No data locks
  • Manual (often visual!) data verification
  • Manual calculations
  • Manual data formatting
  • Manual fix of errors (resulting in more manual verification)

To reduce errors in CMC reports, we must reduce manual data touches. Better yet, we should eliminate manual data touches altogether from the entire process. Automating the process of moving data from source systems to submission, while adhering to a standardized convention for naming, taxonomies, ontologies, and data item internal structure will reduce the overall resource needs to generate reports, reduce the time for data verification, and eliminate data errors. Use a structured data approach to automate the creation of all data tables.

Structuring Individual Data Items

The definition of the phrase “structured data” can vary based on the audience and use scenarios. In pharmaceutical data science, structured data is often associated with following a standardized taxonomy or ontology to ensure consistency across all platforms and uses. Groups like the Allotrope Foundation [7] and others work to standardize ontologies within and across industries. For this article, we consider structure in the fundamental sense of how an individual item is internally organized; we define a data item along with its attributes and connections. Additional articles in this series discuss moving from the individual data item to sets of data items, data transcription and aggregation, and automating the creation of complex data tables for electronic pharmaceutical reports.

A data item with internal structure is not simply an “entry or row in MS Excel™.” It is a unique entity that exists with a set of attributes. It exists over time and tracks who uses it, if it can be changed, and who approves it. A data item can connect with other data items for dynamic links, which can be traced and displayed to the user. It may exist in many locations while still being a single data item. If something about the item changes, the change is seen everywhere that the data item is used. It is a single data item being used in multiple locations (Fig. 5). These different locations may be data tables, prose sections of reports (where the data item is automatically inserted), projects, and traces.


Figure 5: Definition of structure in a data item

This approach to structuring individual data items creates a strong foundation for data science initiatives in both formal and informal reporting for all modules of the CTD. It provides the flexibility to use data in multiple formats and reports while ensuring data integrity and a trustworthy chain of custody throughout the life span of every data item. Most data items have a life span of years, and using this approach to structure will support the many ways in which data is consumed, including in fully structured online filings to health authorities.

Structured Data Approach Example: Purity Analysis

Using stability again as an example, this structured approach is easily understood. Teams conduct studies that track many analyses, or parameters, over time (the word “parameter” is used here as a general way to describe what is being tested). Each test result includes many required entries. In this very simplified example (Fig. 6), there are two measurements conducted for Microbial Content, which itself is part of the overall Purity: C. difficile and E. coli. To structure this data, it must be organized in a meaningful way that can be used for reporting and other uses. Having the measurement data in rows, as in MS Excel™, is not structure and leads to the problems discussed earlier. The most useful, and powerful, way to create structure at this level is to use the analyses and measurements as the foundational data items leading to structured electronic reporting.


Figure 6: (simplified) Example of stability testing data

Purity is an analysis. It is not directly tested with a single measurement. Instead, it is broken down into groups of measurements, the results of which all contribute to the overall purity. There is variation in industry nomenclature; don’t let the words get in the way of the structure and organization of the stability data.

Purity (analysis)

    Microbial Content

        C. difficile (measurement)

        E. coli (measurement)

Physical (analysis)

    Appearance (measurement)

    Size (measurement)

Friability (measurement)

All measurements (test results) for an analysis include more detail than just the actual results. For example, they are associated with a particular sample, were held under certain storage conditions, came from a particular manufacturing site, and were tested at a certain time point. All of this information becomes attributes on the structured data item (Fig. 7).


Figure 7: Structure of a Stability Parameter
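A minimal Python sketch of this attribute structure might look like the following; the field names are illustrative assumptions, and the frozen dataclass stands in for the rule that captured results are never altered.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # source results are never altered once captured
class MeasurementResult:
    analysis: str           # e.g. "Purity"
    measurement: str        # e.g. "E. coli"
    value: float
    units: str
    sample_id: str
    batch: str
    site: str               # manufacturing site
    storage_condition: str  # e.g. "25C/60%RH"
    time_point_months: int

result = MeasurementResult("Purity", "E. coli", 0.3, "CFU/g",
                           sample_id="S-104", batch="B2", site="Plant-3",
                           storage_condition="25C/60%RH", time_point_months=6)
```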

Using this approach, the Purity data item contains all of the attributes needed to render it useful and reliable for reporting. The same is true for every other analysis, and measurement, in a stability study. The results from testing are included with Purity in its attributes. They are captured as entered during the study. They are secure because Purity is an internally structured data item with access control, an audit log, and a complete chain of custody to ensure data integrity.

As a structured data item, Purity can render its attributes at different locations in different formats. For example, a particular report may call for reporting any Microbial Content result that is less than or equal to .5 as “≤.5.” The actual measurement results in our example from Figure 6 have values of .45 and .3, which are never changed (no altering of source data!). An internally structured data system allows the user to designate that, for a certain report, any Microbial Content result less than or equal to .5 must be displayed as ≤.5 (Fig. 8, 9). It is a simple setting that can “stick” for a report while not altering the actual entered value for the measurement.


Figure 8: Setting a display value for a measurement in a structured data system

(from Cognition Corporation’s Drug Stability Reporting Application)


Figure 9: Stability table displays formatted values as required without changing source values

(from Cognition Corporation’s Drug Stability Reporting Application)
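The display-rule behavior can be sketched in a few lines of Python: the rule lives with the report, the stored values stay untouched, and removing the rule restores the original rendering. The function and rule names are illustrative assumptions (and the rendering here shows “≤0.5” rather than “≤.5”).

```python
def display(value: float, report_rules: dict) -> str:
    """Render a stored value for one report; the stored value is never modified."""
    threshold = report_rules.get("le_threshold")  # e.g. 0.5 for this report
    if threshold is not None and value <= threshold:
        return f"\u2264{threshold:g}"             # rendered as "≤0.5"
    return f"{value:g}"

rules = {"le_threshold": 0.5}
source_values = [0.45, 0.3, 0.62]                 # captured results, never modified
print([display(v, rules) for v in source_values]) # ['≤0.5', '≤0.5', '0.62']
print([display(v, {}) for v in source_values])    # rule removed -> originals shown
```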

Microbial Content measurements may display with different formats in different reports as needed. The source value is always retained by the structured data item. Any report can reset the display values of Microbial Content back to the original at any time. Although all values in the above stability table that are less than or equal to .5 are displayed as ≤.5, changing that display setting will remove the formatting and display the original entered values (Fig. 10, 11). This is just some of the power of using internally structured data items for pharmaceutical reporting.


Figure 10: Set formatting back to default to display original (source) data value

(from Cognition Corporation’s Drug Stability Reporting Application)


Figure 11: Stability table displays original (source) data values

(from Cognition Corporation’s Drug Stability Reporting Application)

A system using internally structured data items also supports dynamic stability reports (and other report types). For example, the system may import initial source data from LIMS or some other system. The source data (Fig. 12) may have a certain number of Time Points and may be missing measurement data.


Figure 12: Initial stability testing values in source system

On import, all data is created in the structured system. The system detects missing values and can add NT, NR (Fig. 13), or some other entry to flag that there is missing data.

Figure 13: Structured system flags missing data

At a later point in time, the source data is updated. There may be a change to a previous value, missing data filled in, and new Time Points added (Fig. 14). This is common behavior, as stability testing continues over sometimes long periods of time.

Figure 14: Source system has changed since previous import

The structured system can interrogate the source system and perform another import. At this point, the structured system detects the changed value for 1 Month, the value that was previously missing for 3 Months, and the new value for 9 Months (Fig. 15).

Figure 15: Structured system detects, and handles, changes over time from source data

 

The structured system can be adjusted to handle such an update based on the needs of the project at any particular time. For example, the system may flag that at 3 Months, there was an original value from the source and now a new value. Both values can be retained and the user can decide which value to use in a data table report. The structured system can add the value for 3 Months that was previously missing. It can also add the new Time Point for 9 Months.

The structured system handles new, or changing, source data by either merging or overwriting as the source updates. This is important because stability reports need to be dynamic and update over time.
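A simplified sketch of this re-import behavior, with assumed time-point labels and a simple "overwrite and report" merge policy, might look like this:

```python
# Compare incoming source data against existing values: flag missing results,
# detect changed values (keeping the old value in the change report for user
# review), and add new time points. A real system would make the policy configurable.

def reimport(existing: dict, source: dict, missing_flag: str = "NT") -> dict:
    report = {"changed": [], "filled": [], "new": []}
    for time_point, value in source.items():
        old = existing.get(time_point)
        if time_point not in existing:
            report["new"].append(time_point)
        elif old == missing_flag and value is not None:
            report["filled"].append(time_point)
        elif value is not None and value != old:
            report["changed"].append((time_point, old, value))
        existing[time_point] = value if value is not None else missing_flag
    return report

data = {"1M": 0.45, "3M": "NT", "6M": 0.50}  # after first import (3-month result missing)
update = {"1M": 0.46, "3M": 0.48, "6M": 0.50, "9M": 0.52}
print(reimport(data, update))
# {'changed': [('1M', 0.45, 0.46)], 'filled': ['3M'], 'new': ['9M']}
```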

Summary

This article has covered some of the basics of using internally structured data for stability reporting. The power is evident with the ability to manage and render data in various ways, over a period of time, while not altering source data. This fundamental aspect of internally structured data is what gives it the power to handle all types of pharmaceutical reporting, both analytical and process. In the next article, we will look at some similarities and differences between pharmaceutical and medical device data and how the pharmaceutical industry can take advantage of work done by the device industry using this approach to internally structured data.

 

About Cognition

Cognition Corporation, headquartered in Lexington, Massachusetts, develops, sells, and supports product development and compliance solutions for the life sciences industry. Its Software-as-a-Service solutions help companies meet regulations faster with real-time traceability, guided design controls, and “change once, update everywhere” functionality, turning manual and disconnected data into streamlined, structured submissions that help them get to market faster.

About Astrix

Astrix is the unrivaled market leader in creating & delivering innovative strategies, technology solutions, and people to the life science community. Through world-class people, process, and technology, Astrix works with clients to fundamentally improve business, scientific, and medical outcomes and the quality of life everywhere. Founded by scientists to solve the unique challenges of the life science community, Astrix offers a growing array of fully integrated services designed to deliver value to clients across their organizations. To learn the latest about how Astrix is transforming the way science-based businesses succeed today, visit www.astrixinc.com.


References

[1] ISPE. Pharma 4.0. Accessed September 1, 2023. https://ispe.org/initiatives/pharma-4.0

[2] FDA. Data Integrity and Compliance With Drug CGMP. December 2018. https://www.fda.gov/media/119267/download

[3] Allotrope Foundation. Allotrope Framework. Accessed September 1, 2023. https://www.allotrope.org/allotrope-framework

[4] FDA. Guidance for Industry M4Q: The CTD – Quality. August 2001. https://www.fda.gov/media/71581/download

[5] USP. Stability Considerations in Dispensing Practice (1191). Accessed September 1, 2023. http://www.uspbpep.com/usp32/pub/data/v32270/usp32nf27s0_c1191.html

[6] ICH. Stability Testing of New Drug Substances and Products, Q1A(R2), Section 1.3. February 6, 2003. https://database.ich.org/sites/default/files/Q1A%28R2%29%20Guideline.pdf

[7] Allotrope Foundation. Accessed September 1, 2023. https://www.allotrope.org

Green Tech for Greener Biobanking: How Biobank Software Leads the Decarbonization Drive

The need for decarbonization is becoming increasingly urgent as the world grapples with the severe consequences of climate change. With growing emphasis on reducing carbon emissions across various industries, it is inevitable that the energy-intensive nature of biobanks, responsible for storing valuable biological samples, comes under scrutiny. Biobanks, essential repositories of biological materials, play a critical role in scientific research, healthcare advancements, and drug development. However, their operations, often relying on energy-intensive cooling systems, freezers, and high-tech equipment, contribute significantly to carbon emissions.

In this article, we delve into the challenges and opportunities for decarbonization in biobanking.

The Overlooked Sustainability Challenge in Biobanks

In recent decades, biobanks have experienced exponential growth, leading to a significant surge in their storage requirements. This remarkable expansion has raised environmental concerns, but surprisingly, the concept of sustainability within biobanks has not adequately incorporated environmental considerations. A 2022 study published by Taylor & Francis revealed a lack of awareness among key stakeholders, including researchers utilizing biobank resources and digital sustainability experts. Moreover, environmental sustainability policies within biobanks are virtually non-existent despite their status as public goods and their reliance on public funding. This concerning gap in policies and discussions neglects the substantial ecological impact of biobanks, which demands thoughtful evaluation in terms of governance and overall sustainability.

Decarbonization in Biobanking: The Roadblocks

The primary hurdle in achieving decarbonization within biobanking is cutting down energy usage. Most biobanks rely on electricity-powered ultracold freezers; consequently, the central concern is reducing the electric energy consumption of deep freezers, such as those operating continuously at approximately -80°C.

Another energy-intensive aspect of biobanking pertains to the utilization of liquid nitrogen (LN2). The rationale behind opting for LN2 instead of electricity for long-term storage of biological samples lies in its capacity to achieve lower temperatures, creating a highly stable ultralow-temperature environment. However, the challenge lies in the limited availability of information regarding energy consumption specifically related to regular LN2 usage, although other aspects of LN2 consumption, such as its safety for staff in terms of occupational accidents, are well-documented. To effectively plan, execute, and assess decarbonization efforts in biobanking, it is crucial to gain a comprehensive understanding of the LN2 scenario within the industry.

The challenges faced by Low-and-Middle-Income Countries (LMICs) are distinct from those in High-Income Countries (HICs) because LMICs generally have less well-equipped facilities. Cost considerations drive their preferences, and acquiring high-quality deep freezers poses a significant financial challenge for LMIC biobanks. Low-cost freezers that consume more electricity and release more heat are the norm in these countries. These low-cost freezers often do not comply with chlorofluorocarbon/hydrochlorofluorocarbon (CFC/HCFC) regulations and do not use environmentally friendly refrigerants.

Opportunities for Decarbonization in Biobanking in LMICs and HICs

LMIC biobanks can pursue decarbonization despite the obstacles they face. Implementing intelligent strategies and adopting behavioral changes can promote eco-conscious practices. For instance, implementing a just-in-time model can effectively decrease the need for long-term sample storage in LMIC biobanks. This approach enables these biobanks to focus on prospective collections and short-term storage while actively collaborating with stakeholders to fulfill their specific needs. Another frugal way to support decarbonization involves using applications that enable the pooling of samples from various locations by a single individual or vehicle. LMIC biobanks may also explore the adoption of diverse supplementary actions. These could entail the use of energy-saving office lighting, optimizing heating and cooling systems, promoting a paperless environment within the biobank, and incorporating an energy star rating as a purchasing criterion for equipment to encourage manufacturers to prioritize energy-efficient designs for future equipment.

HICs, on the other hand, have the capability to embrace advanced technologies that offer higher energy and LN2 efficiency. These technologies should undergo rigorous testing and, upon successful validation, must be gradually integrated into existing biobanking facilities and networks. Additionally, HICs need to develop methods that enable the storage of larger quantities of samples at room temperature, effectively reducing the heating and cooling requirements for equipment.

Figure 1: A diagrammatical depiction of the decarbonization opportunities in biobanking (Figure courtesy of CloudLIMS).


How Can Biobank Software Help Decarbonize Biobanks?

A biobanking LIMS can help biobanks mitigate their environmental impact. First and foremost, it eliminates the need for paper-based records and documentation. This transition to digital documentation decreases reliance on paper, thereby reducing the carbon footprint associated with paper production, transportation, and disposal. With the adoption of digital documentation and biobank software, biobanks can effectively minimize their ecological footprint while simultaneously enhancing the record-keeping efficiency of biobanks. Furthermore, employing biobank software that operates on the public cloud, such as AWS and Google Cloud, enables multi-tenancy and sharing of resources across various users, promoting seamless real-time collaboration and diminishing individual carbon footprints.

Figure 2: A biobanking LIMS to manage samples and associated metadata digitally (Figure courtesy of CloudLIMS).

Conclusion

Despite the increasing awareness and actions taken to reduce carbon footprints in other sectors, the conversation about environmental sustainability in biobanking remains relatively absent. As the world strives to address environmental challenges and transitions towards a sustainable future, it is crucial for the biobanking community to actively engage in discussions about adopting greener practices, optimizing energy consumption, and exploring eco-friendly alternatives without compromising the integrity and preservation of valuable samples. There’s no denying that challenges exist, especially in LMICs, where decarbonization might entail additional financial costs they may not be able to afford. However, with some ingenuity, behavioral and operational changes, cloud-based biobank software, and paperless operations, LMICs can also work towards decarbonization. Embracing sustainable practices and adopting innovative technologies are essential steps in ensuring a greener and more sustainable future for biobanks. In doing so, the industry can not only contribute to broader global decarbonization efforts but also ensure the long-term viability and societal benefits of these invaluable repositories of biological resources.

About CloudLIMS:

CloudLIMS.com is an ISO 9001:2015 and SOC 2-certified informatics company. Their SaaS, in-the-cloud Laboratory Information Management System (LIMS), CloudLIMS, offers strong data security, complimentary technical support, instrument integration, hosting and data backups to help biorepositories, analytical, diagnostic testing and research laboratories, manage data, automate workflows, and follow regulatory compliance such as ISO/IEC 17025:2017, GLP, 21 CFR Part 11, HIPAA, ISO 20387:2018, CLIA, ISO 15189:2012, and ISBER Best Practices at zero upfront cost. Their mission is to digitally transform and empower laboratories across the globe to improve the quality of living.

About Astrix:

Astrix partners with many of the industry leaders in the informatics space to offer state of the art solutions for all of your laboratory informatics needs. Through world-class people, process, and technology, Astrix works with clients to fundamentally improve business, scientific, and medical outcomes and the quality of life everywhere. Founded by scientists to solve the unique challenges of the life science community, Astrix offers a growing array of fully integrated services designed to deliver value to clients across their organizations.

About The Author:

Montserrat Valdes is a highly skilled chemical engineer with a diverse background in research and industry. She holds a Master of Science degree in Chemical Engineering from the University of Saskatchewan and an Analytical Chemistry Diploma from the National Autonomous University of Mexico. Currently, Montserrat is working with CloudLIMS.com as a Scientist.

Montserrat is an experienced chemist with expertise in the analysis of cannabis and nicotine-containing products. As a QC Chemist and Analytical Chemist, she has conducted numerous accredited testing methods to ensure the quality and compliance of cannabis products. Her experience also includes validating analytical testing methods and operating, calibrating, and troubleshooting a variety of analytical instruments, including HPLC-FLD/DAD/VWD, IC, and LC-MS.

In addition to her work as a chemist, Montserrat has also served as a Research Engineer, successfully coordinating various environmental projects of great importance. These projects include the simultaneous capture of NH3 and H2S using nanoparticles, the biodegradation of surrogate naphthenic acids, and the adsorptive removal of antibiotics from livestock waste streams.

Montserrat has also made significant contributions to scientific literature through her research articles and conference presentations. Her publications in journals such as the Journal of Environmental Chemical Engineering and Bioprocess and Biosystems Engineering highlight her expertise in topics ranging from nanotechnology applications to biodegradation and wastewater treatment.

Breaking the Cycle: Effective Strategies for Cannabis Testing Labs and Regulators to Address THC Inflation and Lab Shopping


The legalization of cannabis in different states in the US has resulted in a rapid expansion of the industry. It is predicted that the regulated cannabis industry will reach $82.3 billion by 2027. The primary factors contributing to the growth of the cannabis industry are the increasing acceptance and legalization of cannabis for both medical and recreational purposes. This trend is expected to continue as more states realize the potential advantages of legalizing cannabis, including generating more tax revenue, creating job opportunities, and providing medical benefits. The legalization of cannabis has also stimulated new research and development, leading to innovative new products. The expansion of this industry is being propelled by the growing popularity of cannabis commodities, such as edibles, oils, and tinctures, among consumers. This has resulted in a wide range of opportunities for both entrepreneurs and investors. Thus, we are observing the emergence of various cannabis-related enterprises, which include those engaged in the growing, processing, testing, and distribution of cannabis products, as well as those providing legal, financial, and consulting services.

The industry has a lot going for it. Nevertheless, despite the early successes, it has experienced some obstacles, one of which gained attention in 2022: the problem of THC inflation. The issue has caused extensive laboratory shopping as cultivators strive to obtain the highest reported THC concentrations for their products.

THC Inflation, Lab Shopping, and the Vicious Cycle

THC inflation refers to falsely reporting elevated THC levels for a sample, showing a greater THC concentration than actually exists. Strains with less than 10% THC are classified as low THC strains, while those with over 20% THC are considered high THC strains.

As a result of THC inflation, the lab shopping trend has emerged, in which unethical producers search for laboratories that falsely enhance THC levels. This activity has become so common that certain labs openly promote their services based on the high THC numbers.

Many people who use cannabis believe that products containing higher levels of THC always produce stronger effects. However, this is a mistaken belief, as the potency of a cannabis product cannot be determined solely by its THC content. The misconception has contributed to an exponential increase in demand for high-THC products. As a result, consumers are willing to pay more for these products. The emphasis on high-THC goods has given rise to fraudulent laboratories that deliberately inflate THC levels. Unethical producers are attracted to these laboratories, while ethical ones experience a decline in their business. This practice breaks consumer trust and undermines the industry’s credibility. If labs continue to allow the labeling of products with falsely high potency, customers will lose faith in the regulated market.

Breaking the Cycle: Addressing the Twin Trends of THC Inflation and Lab Shopping

The absence of standardized testing procedures in the industry is leading to mounting concerns about THC potency inflation. This is due to variations in the methodologies and equipment used by different labs, resulting in inconsistent test results. Unethical labs take advantage of this and report exaggerated THC levels. Moreover, the scope for manual intervention allows dishonest labs to manipulate results and deceive regulatory agencies.

Some of the ways to control THC inflation are outlined below:

  • It is crucial to establish a universal testing standard within the industry.
  • The same samples should be analyzed by multiple labs, and any anomalies should be recognized. States should then promptly act against labs that report inflated THC numbers knowingly.
  • It is important to eliminate the motivation for inflating THC potency. This can be achieved through various measures such as promoting transparency among labs, conducting regular audits by state regulatory bodies to detect any data inconsistencies or inaccuracies, and hiring expert data scientists by state agencies. This, in turn, will boost consumers’ confidence in the regulated market.
  • It is also essential to dispel the incorrect belief that higher THC levels are the only reliable indicator of potency. Raising awareness and promoting effective communication in this regard would help tackle the problem of THC potency, thereby reducing the occurrence of lab shopping.
  • Lastly, laboratories must obtain accreditation to ISO/IEC 17025 to demonstrate their competence to generate reliable results.
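
As a rough illustration of the multi-lab consensus check suggested above, the sketch below flags labs whose reported THC values sit far above the group median for the same blind sample. The lab names, values, and flagging threshold are hypothetical, and a real proficiency-testing scheme would apply more rigorous statistics than this median/MAD z-score.

from statistics import median

def flag_inflated_labs(results: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Flag labs whose reported THC % sits far above the group consensus."""
    values = list(results.values())
    center = median(values)
    # Median absolute deviation, scaled to approximate a standard deviation;
    # robust statistics keep a few inflated values from dragging the center up.
    mad = median(abs(v - center) for v in values) * 1.4826
    if mad == 0:
        return []  # All labs agree exactly; nothing to flag.
    # One-sided check: THC inflation only pushes results upward.
    return [lab for lab, v in results.items() if (v - center) / mad > threshold]

# Hypothetical round-robin: five labs test the same blind sample.
reported = {"LabA": 18.2, "LabB": 18.9, "LabC": 17.8, "LabD": 18.5, "LabE": 24.1}
print(flag_inflated_labs(reported))  # ['LabE']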

Image 1: A diagrammatic depiction of the strategies for controlling THC inflation and lab shopping in the cannabis industry (Figure courtesy CloudLIMS)

Why Cannabis Lab Testing Software?

A Laboratory Information Management System (LIMS), also known in this context as cannabis lab testing software, can help a laboratory fulfill the ISO/IEC 17025 requirements, increasing trust and assurance in the accuracy of the laboratory's test results.

The adoption of cannabis lab testing software can be a game-changer. By integrating analytical instruments and enforcing strict adherence to quality standards, the software automates processes and minimizes the potential for human error in test results. And that's not all. The software can generate reports with a scannable QR code that is easily shared with customers in real time. The QR code can also be configured to link to the original certificate of analysis (CoA) generated by the lab, allowing consumers to verify the composition of the product they are purchasing. This not only fosters transparency but also instills greater trust and confidence in the products customers consume.
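
As a minimal sketch of the QR-code workflow described above, the snippet below renders a code that resolves to a CoA link. It assumes the third-party qrcode Python package (pip install "qrcode[pil]"); the URL scheme and sample identifier are hypothetical, and in practice the LIMS itself would generate and host the CoA link.

import qrcode  # Third-party package; assumed available.

def coa_qr_for_sample(sample_id: str, base_url: str = "https://lab.example.com/coa") -> None:
    """Render a QR code that resolves to the lab's published CoA for a sample."""
    coa_url = f"{base_url}/{sample_id}"
    img = qrcode.make(coa_url)        # Returns a PIL image of the QR code.
    img.save(f"coa_{sample_id}.png")  # Embed this image in the label or report.

coa_qr_for_sample("BATCH-2023-0042")  # Hypothetical sample identifier.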

Cannabis lab testing software also helps ensure the authenticity and reliability of laboratory data, leaving little room for manual manipulation: it tracks and records every laboratory activity, from staff login activities to changes in documents, sample records, and test results. Maintaining high quality standards is paramount to establishing the credibility of results, and the software helps accomplish this by managing QC sample results and identifying analytical errors through comparison with the test samples, providing an added level of trust in the lab's results.
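
To make the QC comparison concrete, here is a minimal sketch that checks a control-sample measurement against its certified value and emits a time-stamped, audit-ready record. The ±10% recovery window is an illustrative assumption; real acceptance criteria come from the lab's validated methods and SOPs.

from datetime import datetime, timezone

def check_qc_result(measured: float, certified: float, tolerance: float = 0.10) -> dict:
    """Return an audit-ready record showing whether a QC result passes."""
    recovery = measured / certified
    return {
        "measured_pct": measured,
        "certified_pct": certified,
        "recovery": round(recovery, 3),
        "passed": abs(recovery - 1.0) <= tolerance,
        "checked_at": datetime.now(timezone.utc).isoformat(),  # Date and time stamp.
    }

# A certified 15.0% THC control measured at 17.1% fails the run.
print(check_qc_result(measured=17.1, certified=15.0))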

Image 2: Cannabis lab testing software recording all laboratory activities with a date and time stamp (Figure courtesy CloudLIMS)

Conclusion

The legalization of cannabis in most US states has led to a rapidly expanding industry. However, the industry faces obstacles, including THC inflation and lab shopping, which undermine the credibility of the regulated cannabis market. The absence of standardized testing procedures and the variation in methodologies and equipment across labs are gaps that corrupt labs have exploited. Addressing these issues requires a universal testing standard, greater transparency among labs, and state agencies that conduct regular audits and recruit skilled data scientists. The adoption of a LIMS can help cannabis testing labs fulfill the ISO/IEC 17025 requirements, increasing trust and assurance in the accuracy of their results. With the right measures in place, the cannabis industry can continue to grow and thrive, providing medical benefits, creating jobs, and generating tax revenue.

About Astrix

Astrix is the unrivaled market leader in creating & delivering innovative strategies, technology solutions, and people to the life science community. Through world-class people, process, and technology, Astrix works with clients to fundamentally improve business, scientific, and medical outcomes and the quality of life everywhere. Founded by scientists to solve the unique challenges of the life science community, Astrix offers a growing array of fully integrated services designed to deliver value to clients across their organizations. To learn the latest about how Astrix is transforming the way science-based businesses succeed today, visit www.astrixinc.com.

About the Author

Montserrat Valdes is a highly skilled chemical engineer with a diverse background in research and industry. She holds a Master of Science degree in Chemical Engineering from the University of Saskatchewan and an Analytical Chemistry Diploma from the National Autonomous University of Mexico. Currently, Montserrat is working with CloudLIMS.com as a Scientist.

Montserrat is an experienced chemist with expertise in the analysis of cannabis and nicotine-containing products. As a QC Chemist and Analytical Chemist, she has performed numerous accredited testing methods to ensure the quality and compliance of cannabis products. Her experience also includes validating analytical testing methods and operating, calibrating, and troubleshooting a variety of analytical instruments, including HPLC-FLD/DAD/VWD, IC, and LC-MS.

In addition to her work as a chemist, Montserrat has served as a Research Engineer, coordinating several important environmental projects. These include the simultaneous capture of NH3 and H2S using nanoparticles, the biodegradation of surrogate naphthenic acids, and the adsorptive removal of antibiotics from livestock waste streams.

Montserrat has also made significant contributions to scientific literature through her research articles and conference presentations. Her publications in journals such as the Journal of Environmental Chemical Engineering and Bioprocess and Biosystems Engineering highlight her expertise in topics ranging from nanotechnology applications to biodegradation and wastewater treatment.

The post Breaking the Cycle: Effective Strategies for Cannabis Testing Labs and Regulators to Address THC Inflation and Lab Shopping appeared first on Astrix.

Astrix LIMS Tech Brief – SampleManager LIMS™ Barcode Navigation https://astrixinc.com/article/astrix-lims-tech-brief-samplemanager-lims-barcode-navigation/ Tue, 13 Jun 2023 21:55:09 +0000 https://astrixinc.com/?p=25655 Thermo Scientific™ SampleManager™ LIMS 21.1 Barcode Navigation allows users to enter data […]

Thermo Scientific™ SampleManager™ LIMS 21.1 Barcode Navigation allows users to enter data and navigate through SampleManager with a barcode scanner. Several functions can be performed using Barcode Navigation. Access this new Tech Brief from Astrix.

The post Astrix LIMS Tech Brief – SampleManager LIMS™ Barcode Navigation appeared first on Astrix.

LIMS Tech Brief – SampleManager LIMS™ OData Service https://astrixinc.com/article/lims-tech-brief-samplemanager-lims-odata-service/ Tue, 13 Jun 2023 21:26:40 +0000 https://astrixinc.com/?p=25649 Thermo Scientific™ SampleManager™ LIMS 21.1 OData service allows external applications or web […]

Thermo Scientific™ SampleManager™ LIMS 21.1 OData service allows external applications or web publishing services to obtain data from your SampleManager™ LIMS instance.

The post LIMS Tech Brief – SampleManager LIMS™ OData Service appeared first on Astrix.
