Review Article Volume 8 Issue 2
1School of Technology, Universidade Estadual de Campinas (UNICAMP), Brazil
2Department of Computing & Software Engineering, Florida Gulf Coast University (FGCU), USA
3In.lab iNovaHC, Clinics Hospital, School of Medicine, University of São Paulo (USP), Brazil
Correspondence: Leandro Nunes de Castro, School of Technology, Universidade Estadual de Campinas (UNICAMP), Florida Gulf Coast University (FGCU), 10501 FGCU Blvd. South, Fort Myers, Florida, USA, Tel +12395901362
Received: December 10, 2023 | Published: December 26, 2023
Citation: Xavier R, de Castro LN. On the use of evolutionary and swarm intelligence algorithms in transfer learning approaches: a review. Int J Biosen Bioelectron. 2023;8(2):58-64. DOI: 10.15406/ijbsbe.2023.08.00235
Transfer Learning (TL) utilizes pre-trained models to solve similar problems. The knowledge from the original model is transferred to a new model during training, aiming to leverage previous knowledge in a new task. Natural Computing (NC) algorithms, such as Evolutionary Computation (EC) and Swarm Intelligence (SI), draw inspiration from nature, adapting more easily to new computational problems. This bio-inspired adaptation can enhance the performance of TL techniques, improving generalization and reducing computational costs. We investigate how evolutionary and swarm-intelligence algorithms are applied in TL, their contributions, the addressed problems, and the conducted experiments. We employ a systematic review following the PRISMA protocol, PICOS strategy, and START software to analyze primary studies.
Keywords: systematic review, algorithm, swarm intelligence, transfer learning, natural computing
TL, Transfer Learning; NC, Natural Computing; NIC, Nature-Inspired Computing; EC, Evolutionary Computation; SI, Swarm Intelligence; SR, Systematic Review
Transfer Learning (TL) is an approach in which a previously trained model shares its effective characteristics with other models to solve, or optimize the solution to, similar problems. The knowledge acquired by this pre-trained model, which has demonstrated satisfactory performance on a prior task (the source task) within its original context (the source domain), is transferred to a new model in training, associated with the target domain. The new model may be similar to or different from the previous one, and the goal is to train it to solve a new task, the target task, which may or may not resemble the source task.1 Natural Computing (NC), in turn, studies and models natural processes, providing alternative solutions to problems that traditional techniques do not address efficiently.
Nature-inspired algorithms, such as Evolutionary Computation (EC) and Swarm Intelligence (SI), illustrate how the behavior of living organisms can inspire technological advances.2 Given the inspiration from nature, NC algorithms can adapt to the environment and time. For example, Genetic Algorithms (GA) can undergo genetic variations when creating new populations, while Particle Swarm Optimization (PSO) adjusts its behavior based on the swarm to which it belongs.3 The adaptation of these algorithms, including their learning capability, is a relevant area of study in computer science.
It is therefore reasonable to expect that the capabilities of nature-inspired algorithms can improve the performance of TL techniques, bringing greater generalization and even lower computational cost to the knowledge transfer iterations performed within them. Consequently, it is essential to understand how these bio-inspired algorithms are being applied in TL techniques, how they assist in this application, which problems (both in TL and in NC) are being solved or optimized, and how the experiments were conducted (including instances, benchmarks, and performance evaluation metrics) to compare the proposed algorithms with state-of-the-art algorithms.
This approach was conducted through a systematic review, a methodology that employs robust and reproducible protocols to address questions formulated by the authors. In this study, we adopted the PRISMA protocol,4 the PICOS strategy,5 and the START software6 to systematize the collection of works and analyze each of the primary studies.
The paper is organized as follows. Section 1 provides the context, motivation and a general overview of the work. Section 2 presents the main concepts explored in the paper, emphasizing transfer learning, evolutionary computing, and swarm intelligence. Section 3 presents the materials and methods used in the research, and Section 4 presents the results and discussions around them. The paper is concluded in Section 5 with a general overview of its contributions and the main open problems observed in the field.
In this review, certain concepts are discussed that need clarification for a better understanding of the content. Therefore, the following subsections present a general overview of nature-inspired algorithms and transfer learning.
A briefing on nature-inspired computing
Natural Computing (NC) is a branch of computer science that seeks inspiration from nature to create innovative approaches in solving complex problems, exploring naturally intelligent systems. This interdisciplinary field combines computer science with disciplines such as chemistry, physics, and biology.2
Some authors2,4 classify the terminology of Natural Computing into three main categories:
1) Simulation of natural phenomena in computing; 2) Nature-Inspired Computing (NIC), which develops techniques based on nature to solve problems; and 3) Computing with new materials from nature, which uses materials collected from nature to compute.7
Figure 1 illustrates the main areas and subareas of Natural Computing.
There is a trend in Natural Computing (NC) research, especially in the Nature-Inspired Computing (NIC) category, of emphasizing Artificial Neural Networks (ANN). These networks play a significant role in emerging machine learning methodologies, such as Deep Learning,8 and in other technologies, such as Natural Language Processing (NLP).9 Therefore, investigations dedicated to exploring other categories of NC can identify gaps and introduce new paradigms to the field of study.
The systematic review conducted in this work provides an overview of NC algorithms that can assist in Transfer Learning techniques. The algorithms proposed in the base studies are mostly in the Nature-Inspired Computing (NIC) class, specifically in the categories of Evolutionary Computation (EC) and Swarm Intelligence (SI). In the following sections, we will explore the fundamental concepts of these algorithms.
Evolutionary Computing (EC)
During this review, it was observed that most NC algorithms used in TL techniques fall within evolutionary computation. These algorithms, grounded in evolutionary biology, adapt populations of individuals to the environment through reproduction, random genetic variation, and selection, the fundamental processes of evolution.3,10 Evolution, as an optimization process, aims at adaptation to the environment, as highlighted by Mayr11 and related to computation by Baeck.10 The capacity for robust adaptation, essential for computational systems, results in agile and precise performance, which is what characterizes an algorithm as optimal.12
Among the algorithms observed in the primary studies, two subtypes of EC algorithms are commonly used with TL strategies, namely Genetic Algorithms (GA) and Genetic Programming (GP). Each category of evolutionary computation algorithm has distinct characteristics. GAs are search-based, typically encoding candidate solutions in a binary or real-valued representation and replicating recombination and mutation processes across generations; their generality can make the choice of representation itself a challenge.13 GP, on the other hand, deals directly with the representation problem, increasing the adaptability of the evolved structures14 by using hierarchies composed of terminal and primitive functions.15
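To ground these concepts, the following minimal sketch (purely illustrative and not taken from any of the reviewed studies) shows a generational GA with a binary representation, tournament selection, one-point crossover, and bit-flip mutation; the one-max function is a stand-in for a real fitness.

```python
import random

def one_max(bits):
    """Toy fitness: count of ones in the chromosome (stand-in for a real objective)."""
    return sum(bits)

def tournament(pop, fitness, k=3):
    """Return the fittest of k randomly sampled individuals."""
    return max(random.sample(pop, k), key=fitness)

def evolve(n_bits=20, pop_size=30, generations=50, p_cross=0.9, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(pop, one_max), tournament(pop, one_max)
            if random.random() < p_cross:                 # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                            # bit-flip mutation
                children.append([1 - b if random.random() < p_mut else b for b in c])
        pop = children[:pop_size]
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))  # fitness of the best individual found
```

In the TL studies reviewed later, the same evolutionary loop operates on instance weights, feature transformations, or transfer parameters rather than on a toy bit string.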
Swarm Intelligence (SI)
Another Nature-Inspired Computing approach observed in the base studies is that of Swarm Intelligence (SI). SI occurs when a population of limited and usually non-intelligent entities exhibits swarm behavior, interacting intelligently with each other,16 as observed in flocks of birds, herds of quadrupeds, or schools of fish. This concept extends to various other 'populations', such as ant colonies, cars in traffic, or even people in a crowd, characterized by the collective behavior of the group.17
There are two lines of research in Swarm Intelligence:3 1) Studies based on insects or animals with social behaviors; and 2) Works that explore the ability of human society to process knowledge. Although the studied individuals are distinct, both depend on interaction within a population and with the surrounding environment, resulting in environmental impacts or interactions between individuals.
The SI approach present in the base studies is Particle Swarm Optimization (PSO), which simulates the social behavior of a swarm. In this type of algorithm, particles use their best positions and those of their neighbors to seek an optimal solution.2 Individuals learn from their own experiences and those of others, evaluating and imitating the behavior of the most successful. PSO is motivated by the behavior of fish schools and bird flocks, seeking a balance between global and local optimization through a self-organized and self-adjusting system.17
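For comparison, the sketch below shows the textbook global-best PSO update (again illustrative, not the specific variants reviewed in later sections): each particle is pulled toward its own best position and the swarm's best position.

```python
import random

def sphere(x):
    """Toy objective to minimize (stand-in for a real problem)."""
    return sum(xi * xi for xi in x)

def pso(dim=5, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=sphere)[:]            # global (swarm) best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)[:]
    return gbest

print(sphere(pso()))  # best objective value found (close to 0 for the sphere function)
```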
A briefing on transfer learning
Transfer Learning (TL) optimizes the application of knowledge across domains, potentially saving computational resources, improving training efficiency, promoting better generalization, and facilitating the learning of new tasks. This approach allows models to inherit knowledge, resulting in better performance, faster convergence, and greater adaptability to limited data. Several authors provide interesting and comprehensive studies of TL.1,18,19 TL operates on the premise that knowledge obtained in one context can be applied in another, optimizing the learning process. It embodies the idea that, just as humans build upon their existing knowledge when learning new skills or concepts, learning models should be able to do the same. The essence is to efficiently leverage acquired knowledge, training models to adapt and generalize to new tasks or domains.
The TL technique is characterized by adapting a model or system from a source domain, denoted SD, with a corresponding source task, ST, to improve the performance of a target task, TT, in a target domain, TD, where SD ≠ TD or ST ≠ TT. Formally, TL can be represented as follows:

(SD, ST) → (TD, TT)

In this scenario, → represents the knowledge transfer process, whose main goal is to facilitate the learning of TT by capitalizing on the insights and experiences gained from ST.
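For readers who want the fuller notation, the restatement below uses the paper's symbols; the definition of a domain as a feature space paired with a marginal distribution, and of a task as a label space paired with a predictive function, follows the standard formulation in the TL survey literature.1

```latex
% Domains pair a feature space with a marginal distribution; tasks pair a label
% space with a predictive function (standard TL formulation, restated here):
S_D = \{\mathcal{X}_S, P(X_S)\}, \qquad S_T = \{\mathcal{Y}_S, f_S(\cdot)\},
\qquad
T_D = \{\mathcal{X}_T, P(X_T)\}, \qquad T_T = \{\mathcal{Y}_T, f_T(\cdot)\}.
% Knowledge is transferred from the source pair to the target pair:
(S_D,\, S_T) \;\rightarrow\; (T_D,\, T_T), \qquad S_D \neq T_D \ \text{or} \ S_T \neq T_T.
```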
Transfer Learning Strategies
Transfer Learning (TL) can be categorized into four configurations of transfer strategies, according to what is transferred:1 1) instance-based transfer, which reuses or reweights source-domain instances; 2) feature-representation transfer, which learns a shared or transformed feature space; 3) parameter transfer, which shares model parameters or priors between source and target models; and 4) relational-knowledge transfer, which transfers relationships among the data.
The other categorization of TL is based on the learning strategy and can be divided into three main concepts:19 1) inductive transfer learning, in which labelled data are available in the target domain; 2) transductive transfer learning, in which labels are available only in the source domain; and 3) unsupervised transfer learning, in which labelled data are available in neither domain.
This study aims to explore how evolutionary and swarm algorithms are used in Transfer Learning techniques, identifying the objectives of this usage and evaluating the performance of these approaches. Following the 2020 PRISMA guidelines,4 the PICOS strategy,5 and the START software,6 a systematic review protocol was developed to analyze primary studies, aiding in understanding the objectives of this paper. This review addresses the following research questions: 1) Which natural computing algorithms are being used to optimize transfer learning techniques? 2) With what objectives, i.e., to address which problems within TL techniques, are these evolutionary and swarm algorithms being applied? 3) Which problems and applications do the proposed methods aim to optimize, and how were the comparative experiments (instances, benchmarks, and performance metrics) conducted?
To achieve our goal, search strings consistent with the primary studies sought were created, combined with the OR, NOT, and AND operators, or their equivalents (-, +, =), in the search engines. The strings used were:
We excluded deep learning papers because they open an avenue for a completely different line of research and would result in a review that does not cover the use of evolutionary and swarm algorithms to improve transfer learning. Instead, a plethora of works involving the use of TL in deep learning approaches would be returned, and these are already reviewed in other works in the literature.8,9
After searching and collecting data from the databases, exclusion criteria were applied for better utilization of the primary studies. We used the Scopus, IEEE Xplore, and Web of Science databases, with a publication date range from 2018 to 2023. Articles were excluded if they met the following exclusion criteria:
After primary extraction with the temporal criterion applied, 2,723 studies were retrieved (Table 1). Based on the inclusion and exclusion criteria, these works were reviewed (title, authorship, year, abstract, and keywords), and a total of 60 studies remained for full-text analysis. After reading and analyzing these texts, 10 were included in the systematic review.
Query | IEEE | Scopus | Web of Science
1 | 25 | 446 | 96
2 | 218 | 1,233 | 284
3 | 63 | 115 | 243
Table 1 Primary outcomes (number of studies retrieved per query and database)
The objective of this systematic review is to understand the use of natural computing algorithms for the benefit and improvement of TL techniques. To address all the research questions of this study, the discussion is divided into three main topics and their subtopics.
Transfer learning, evolutionary and swarm algorithms
The first question formulated for this review was "which natural computing algorithms are being used to optimize transfer learning techniques?" To understand this question, it is necessary to observe how NC algorithms can assist in the performance of TL techniques.
Two categories of NC algorithms were identified among the analyzed primary studies, namely SI and EC, specifically PSO, GA, and GP. The algorithms proposed by the analyzed primary studies are applying nature-inspired algorithms to solve a variety of problems, both in optimization in NC and in optimizing TL techniques themselves. Table 2 shows the distribution and summarization of these studies.
Transfer Learning Technique | Nature-Inspired Algorithm | References
Instance Weighting | Genetic Algorithm | Alghamdi et al.25
Instance Weighting | Genetic Programming | Chen et al.26
Feature Selection-Based Domain Adaptation | Particle Swarm Optimization | Nguyen et al.,27 Sanodiya et al.,28 Castillo-García et al.,29 Dhrif et al.30
Feature Transformation-Based Domain Adaptation | Genetic Programming | Al-Helali et al.,31 Al-Helali et al.,32 Al-Helali et al.33
Parameter Selection | Particle Swarm Optimization | Sanodiya et al.34
Table 2 Review summarization
The most common TL techniques among the primary studies collected for this review are "Instance Weighting," "Feature Selection-Based Domain Adaptation," "Feature Transformation-Based Domain Adaptation," and "Parameter Selection".
In Instance Weighting, three algorithms proposed in the base studies are Genetic Algorithm for Source Task Problems – GA(S),25 Genetic Algorithm for Target Task Problems – GA(T),25 and Instance Transfer Genetic Programming – ITGP.26 Instance Weighting, a statistical bias correction technique, addresses sample selection bias, playing a crucial role in transfer learning to overcome the disparity between source and target domains. It reassesses data from the source domain, discarding harmful instances and reusing relevant ones in the target domain, correcting the distribution difference between them.35
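As a concrete illustration of instance weighting (not the GA- or GP-based methods above, which evolve the weights rather than deriving them from residuals as done here), the sketch below down-weights source instances that disagree with a model fitted on a small target sample before reusing the source data:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Closed-form weighted least squares: minimizes sum_i w_i * (y_i - x_i @ beta)**2."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)

# Source domain: mostly consistent with the target relation y = 2x, plus a biased (harmful) block.
X_src = rng.normal(size=(100, 1))
y_src = 2.0 * X_src[:, 0] + rng.normal(scale=0.1, size=100)
y_src[:20] += 5.0                                    # harmful source instances

# Small labelled target sample used to judge which source instances agree with the target task.
X_tgt = rng.normal(size=(10, 1))
y_tgt = 2.0 * X_tgt[:, 0] + rng.normal(scale=0.1, size=10)

beta_tgt = weighted_least_squares(X_tgt, y_tgt, np.ones(len(y_tgt)))
residuals = np.abs(y_src - X_src @ beta_tgt)         # disagreement with the target-fitted model
weights = 1.0 / (1.0 + residuals)                    # down-weight harmful instances (simple heuristic)

beta_transfer = weighted_least_squares(X_src, y_src, weights)
print(beta_transfer)  # close to the target coefficient (~2.0) despite the biased source block
```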
It is understood that TL involves two domains, the source and the target. When the two domains share the same feature space but differ in their data distributions, transfer learning is referred to as Domain Adaptation. Most feature-based domain adaptation strategies aim to create a latent feature space acting as a bridge between the domains. In Feature Selection-Based Domain Adaptation, the expectation is that the datasets derived from the two original sets, when projected into the new feature space, exhibit greater proximity than the original sets.27
Among the base studies, five algorithms are identified: Semi-Supervised Particle Swarm Optimization (SemPSO),27 Unsupervised Particle Swarm Optimization (UnPSO),27 the Feature Selection-Based Transfer Learning Approach Using Particle Swarm Optimization for Unsupervised Transfer Learning (FSUTL-PSO),28 Sticky Binary Particle Swarm Optimization (SBPSO),29 and the Feature Selection Algorithm for Transfer Learning (COMB-PSO-TL).30
In addition to feature-based adaptation, there is Feature Transformation-Based Domain Adaptation, or Feature Weighting. In this TL technique, two approaches exist: symmetric and asymmetric.36 In the symmetric approach, feature transformation maps both domains to a third one representing the common feature space, created to unify both domains. In contrast, asymmetric approaches aim to map one domain into the other.31–33 Three algorithms are associated with this technique: Multi-Tree Genetic Programming-Based Transfer Learning (MTGPTL),31 Multi-Tree Genetic Programming Domain Adaptation (MTGPDA),32 and Multi-Tree Genetic Programming-Based Transfer Learning (MTGP-Based TL).33
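To illustrate the idea of a feature transformation concretely, the sketch below performs a simple second-order alignment in the spirit of CORAL (one of the benchmark methods listed in Table 3), mapping the source features so that their mean and covariance match the target domain's; it is only a stand-in for the evolved multi-tree GP transformations of the reviewed algorithms.

```python
import numpy as np

def matrix_sqrt(C):
    """Square root of a symmetric positive semi-definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def align_source_to_target(Xs, Xt, eps=1e-3):
    """Re-color the source features so their mean and covariance match the target domain's."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)
    whiten = np.linalg.inv(matrix_sqrt(Cs))          # whitening transform for the source
    recolor = matrix_sqrt(Ct)                        # re-coloring with target statistics
    return (Xs - Xs.mean(axis=0)) @ whiten @ recolor + Xt.mean(axis=0)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(200, 3)) @ np.diag([1.0, 2.0, 0.5])           # source domain
Xt = rng.normal(size=(150, 3)) @ np.diag([0.5, 1.0, 2.0]) + 1.0     # shifted, rescaled target domain
Xs_aligned = align_source_to_target(Xs, Xt)
print(np.round(np.cov(Xs_aligned, rowvar=False), 2))  # approximately the target covariance
```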
As for the Parameter Selection technique, learning approaches transfer knowledge at the model/parameter level. For instance, when categorizing objects, knowledge of categories in the source domain can be transferred to the target domain using attributes such as shape and color.37 These parameters, reflecting the probabilistic distribution of features, are learned from the source domain to facilitate classifier training in the target domain, enabling knowledge transfer at parameter level.19 For the Parameter Selection technique, there is only one algorithm, the Particle Swarm Optimization-based Parameter Selection Approach for Unsupervised Discriminant Analysis (UDATL-PSO).34
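A minimal sketch of parameter-level transfer follows. Its assumptions (class-conditional Gaussian parameters and a fixed blending coefficient) are illustrative only; it is not UDATL-PSO, in which PSO searches for the best parameter values. Parameters estimated on the source domain are blended with those from a handful of labelled target examples before classification.

```python
import numpy as np

def gaussian_params(X, y):
    """Per-class means and a shared diagonal variance: the transferable parameters."""
    means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    var = X.var(axis=0) + 1e-6
    return means, var

def predict(X, means, var):
    """Assign each sample to the class whose mean is closest under the diagonal Gaussian."""
    classes = sorted(means)
    scores = np.stack([-(((X - means[c]) ** 2) / var).sum(axis=1) for c in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]

rng = np.random.default_rng(2)
# Source domain: abundant labelled data.
Xs = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2))])
ys = np.array([0] * 200 + [1] * 200)
# Target domain: same classes, slightly shifted, only a handful of labels.
Xt = np.vstack([rng.normal(0.5, 1.0, (5, 2)), rng.normal(3.5, 1.0, (5, 2))])
yt = np.array([0] * 5 + [1] * 5)

src_means, src_var = gaussian_params(Xs, ys)
tgt_means, _ = gaussian_params(Xt, yt)

alpha = 0.5   # blending coefficient; in UDATL-PSO such transfer parameters are chosen by PSO
blended_means = {c: alpha * tgt_means[c] + (1 - alpha) * src_means[c] for c in src_means}

X_eval = np.vstack([rng.normal(0.5, 1.0, (100, 2)), rng.normal(3.5, 1.0, (100, 2))])
y_eval = np.array([0] * 100 + [1] * 100)
print((predict(X_eval, blended_means, src_var) == y_eval).mean())  # accuracy with transferred parameters
```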
Objectives of evolutionary and swarm algorithms in transfer learning techniques
The second question raised by this review pertains to understanding the objectives, that is, the problems being addressed within Transfer Learning techniques by the evolutionary and swarm algorithms. In the context of natural computing, these objectives are diverse, bringing forth interesting strategies to assist in potential challenges present in Transfer Learning techniques. Therefore, the following sections will describe which of these strategies were employed in the algorithms proposed in the foundational studies.
Evolutionary Computing Algorithms
Among the evolutionary computing algorithms discussed in the foundational studies, there are two approaches that use Genetic Algorithms (GA)25 to assist in Transfer Learning techniques, and four that use Genetic Programming (GP).26,31–33 Both GA(S) and GA(T)25 apply GA to model the TL process as a search problem, allowing them to control and measure various factors affecting transfer performance, such as problem difficulty, problem similarity, and information transfer methods. The goal of GA(S) and GA(T) is to understand the underlying mechanisms and principles of TL, aiming to develop more effective knowledge transfer techniques.
With Genetic Programming algorithms, we have Instance Transfer Genetic Programming (ITGP).26 ITGP seeks optimal weights for instances in the source domain, enabling the efficient reuse of informative source instances while mitigating the impact of harmful ones. In summary, ITGP uses a new instance weighting framework to guide the evolutionary process through the evolution of weight vectors.
In the Multi-Tree Genetic Programming-Based Transfer Learning (MTGPTL)31 algorithm, Genetic Programming performs a feature-based transformation on the source domain data, and the transformed data is subsequently used to train a model. Multi-Tree GP Domain Adaptation (MTGPDA),32 in turn, constructs a transformation that maps the feature space of the source domain to the feature space of the target domain. The last GP-based proposal is Multitree Genetic Programming-Based Transfer Learning (MTGP-Based TL),33 which constructs a weighting that maps a source domain to a different but related target domain. This mapping is achieved with multi-tree GP, constructing multiple features from the source domain's features so that the transformed data has a representation similar to that of the target domain. This results in more effective learning for the target domain, compensating for the scarcity or absence of instances in the target domain.
Swarm Intelligence Algorithms
In the realm of Swarm Intelligence (SI) algorithms, there are six representatives, all falling under Particle Swarm Optimization (PSO). In one of the base studies,27 two algorithms, SemPSO and UnPSO, are introduced. Although they employ different transfer strategies (one transductive and the other unsupervised, as their names suggest), the common goal of using PSO in both is to propose a new fitness function that allows the PSO-based feature selection algorithm to operate both when class label information is available in the target domain (semi-supervised) and when it is not (unsupervised). In other words, the task of PSO is to search for the feature subsets with the lowest fitness value.
In the Feature Selection-Based Transfer Learning Approach Using Particle Swarm Optimization for Unsupervised Transfer Learning (FSUTL-PSO)28 algorithm, the nature-inspired algorithm aims to select a good subset of features between the source and target domains to eliminate the risk of degenerated feature transformation. PSO constructs a common feature space using features with high invariant characteristics. The Sticky Binary Particle Swarm Optimization (SBPSO)29 utilizes PSO as a strategy to achieve an effective representation of invariant features that preserves important information from the original features.
The Feature Selection Algorithm for Transfer Learning (COMB-PSO-TL)30 employs PSO to provide high discriminative capacity in both domains, minimize the difference between conditional distributions, and minimize the difference between marginal distributions in the source and target domains. In other words, it enhances classification accuracy in a target domain through knowledge transfer from a source domain. The algorithm conducts subset selection of features (genes) for transfer learning using a two-level combinatorial particle swarm optimization algorithm. Finally, the Particle Swarm Optimization-based Parameter Selection Approach for Unsupervised Discriminant Analysis (UDATL-PSO)34 uses PSO to select the best parameter values for knowledge transfer.
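To make these objectives concrete, the sketch below shows a simplified version of PSO-driven feature selection for transfer. Its ingredients are illustrative assumptions rather than the fitness functions of the reviewed papers: a binary encoding sampled through a sigmoid of the velocity, and a toy fitness that adds a nearest-class-mean error on the source domain to the distance between the domain means over the selected features, as a proxy for marginal-distribution mismatch.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(mask, Xs, ys, Xt):
    """Lower is better: nearest-class-mean error on the source plus a domain-shift proxy."""
    sel = mask.astype(bool)
    if not sel.any():
        return np.inf
    S, T = Xs[:, sel], Xt[:, sel]
    m0, m1 = S[ys == 0].mean(axis=0), S[ys == 1].mean(axis=0)
    pred = (np.linalg.norm(S - m1, axis=1) < np.linalg.norm(S - m0, axis=1)).astype(int)
    err = (pred != ys).mean()
    shift = np.linalg.norm(S.mean(axis=0) - T.mean(axis=0))   # marginal-distribution mismatch proxy
    return err + shift

def binary_pso(Xs, ys, Xt, n_particles=20, iters=40, w=0.7, c1=1.5, c2=1.5):
    d = Xs.shape[1]
    pos = (rng.random((n_particles, d)) > 0.5).astype(float)   # 0/1 feature masks
    vel = np.zeros((n_particles, d))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, Xs, ys, Xt) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)  # sigmoid sampling
        fit = np.array([fitness(p, Xs, ys, Xt) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest

# Synthetic example: features 0 and 1 are informative and stable across domains; the rest are noisy and shifted.
Xs = rng.normal(size=(200, 10))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)
Xt = rng.normal(size=(200, 10)) + np.concatenate(([0.0, 0.0], rng.normal(2.0, 1.0, 8)))
print(np.where(binary_pso(Xs, ys, Xt) > 0.5)[0])   # indices of the selected features
```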
Experiments with the proposed methods and algorithms
The third question addressed in the research pertains to the problems and applications that the proposed methods aim to optimize. In other words, by the end of this section, we aim to understand the problems tackled by the proposed algorithms and how the comparative experiments (benchmarks) were conducted. To this end, Table 3 summarizes the TL and NC problems addressed, the names of the proposed algorithms, the performance measures, the instances used, and the benchmark algorithms against which the comparisons were made.
TL Problem | NC Problem | Proposed algorithm | Performance measures | Instances | Benchmarks
Instance Weighting | Function Approximation | GA(S), GA(T)25 | Estimation Distance (ED), Diversity, Generations for Convergence | Static Source Problem, Fixed Target Problems, Friedman Dataset | 30-Top, 30-Best, 100-Top, and 100-Best
Instance Weighting | Symbolic Regression | ITGP26 | RSE Median, Significance (Wilcoxon Test, Z-Test) | UCI and Delve Datasets (Wine, Kin, Student, House, Abalone), Friedman-1 Dataset, Gas Sensor Dataset | GP-Tar, GP-Comb, TLGP, TLGP-NS, SVR-Tar, SVR-W
Feature Selection-Based Domain Adaptation | Classification | SemPSO, UnPSO27 | Accuracy, Significance (Wilcoxon Test) | Handwritten Digits Datasets, Object Recognition Dataset | TCA and STCA, MIDA and SMIDA
Feature Selection-Based Domain Adaptation | Classification | FSUTL-PSO28 | Accuracy, Parameter Sensitivity Tests | Office+Caltech Dataset, PIE Face Recognition Dataset | TCA, TSL, TJM, TDA-AL, mSDA, MEDA, RDALR, RTML, GFK, GA-DA, AS, UnPSO, JDA, JGSA, LTSL, ILS, CMDL, CORAL, CDDA, DGA-DA, BDA
Feature Selection-Based Domain Adaptation | Classification | SBPSO29 | Accuracy, Precision, Recall, F1-Score, Area Under the Curve (AUC) | Gas Sensor Drift Dataset, Handwritten Digits Dataset, Prostate Dataset, TripAdvisor Dataset | DAN, TCA, ATL
Feature Selection-Based Domain Adaptation | Classification | COMB-PSO-TL30 | Accuracy (Mean, Standard Deviation) | Source Synthetic Dataset, Target Synthetic Dataset | Simple PSO, COMB-PSO-LS, COMB-PSO-TF
Feature Transformation | Symbolic Regression | MTGPTL31 | RSE | UCI Datasets (Housing, Concrete, Forestfires, Yacht-Hydrodynamics) | NoTL, PlainTL
Feature Transformation | Symbolic Regression | MTGPDA32 | RSE | UCI Datasets (Housing, Concrete, Yacht, Forestfires, Auto-MPG, Imports) | WKNNIM, WKNNIMDC, RFIM, MTGPTL
Feature Transformation | Symbolic Regression | MTGP-Based TL33 | RSE | UCI Datasets (Housing, Concrete, Yacht, Forestfires, Auto-MPG, Imports) | Traditional TL Methods
Parameter Selection | Classification | UDATL-PSO34 | Accuracy, Time/Space Complexity | PIE Face Recognition, Caltech Office Datasets | Non-TL Algorithms, TL Algorithms
Table 3 Experiment information for the proposed algorithms and methods
These algorithms are proposed not only to optimize challenges encountered in Transfer Learning (TL) techniques but also to assist in broader optimization problems such as Function Approximation, Classification, and Symbolic Regression. These general optimization problems serve as the basis for comparative tests between the proposed algorithms and the benchmarks, that is, state-of-the-art algorithms that also perform some form of optimization. The Function Approximation problem aims to find a function that best fits a set of observed data or represents a desired relationship between variables; in other words, the goal is to find a mathematical expression that is a good approximation of the real function relating the variables of interest. The GA(S) and GA(T)25 algorithms aim to test TL techniques in frameworks based on Genetic Algorithms. In this experiment, the authors created an environment with source and target problems to test various estimations, such as distances and population numbers generated by the methods.
In Classification problems, the objective is to assign a category or label to a given object based on its observed characteristics.38 Assigned to this problem are the algorithms SemPSO,27 UnPSO,27 FSUTL-PSO,28 SBPSO,29 and COMB-PSO-TL.30 These algorithms are applied to image and document classification using datasets such as Office+Caltech,39 PIE Face Recognition,40 Gas Sensor,41 Handwritten Digits,42,43 Prostate,44 and TripAdvisor.45 In Regression problems, the goal is to predict or estimate a continuous value based on observed characteristics.28 The primary studies analyzed involve Symbolic Regression, which aims to discover a symbolic expression capturing the underlying structure of the data, providing interpretability to the model. This is particularly useful in the presence of missing values, a problem explored by the three Genetic Programming-based algorithms (MTGPTL,31 MTGPDA,32 and MTGP-Based TL33).
In summary, the algorithms addressed in this study not only aim to optimize specific challenges encountered in TL techniques but also extend to assisting in broader optimization problems. By addressing these general problems, the algorithms undergo comparative tests, facing benchmark algorithms representing the state-of-the-art in optimization. Collectively, these approaches open valuable perspectives for advancements in both Transfer Learning and more comprehensive optimization problems.
This paper provided a comprehensive investigation into the crucial role of evolutionary and swarm algorithms in optimizing Transfer Learning techniques. By addressing three main questions, it was possible to explore not only the diversity of the algorithms applied in TL but also their specific objectives and the results of practical experiments. The importance of understanding these algorithms was highlighted, categorizing them into Swarm Intelligence (SI) and Evolutionary Computing (EC). The analysis revealed the significant presence of algorithms such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), and Genetic Programming (GP), showing the versatility of these approaches in overcoming the unique challenges encountered in TL. Innovative strategies employed by these algorithms to solve specific TL problems were explored. From searching for optimal weights for instances in the source domain to constructing feature-based transformations, the potential of these approaches to enhance the effectiveness of learning in TL became evident. The experiments conducted covered problems such as Function Approximation, Classification, and Symbolic Regression. The positive results of these experiments provided valuable insights into the practical applicability of these algorithms in real-world scenarios.
Considering these reflections, it becomes evident that natural computing algorithms play an essential role in the evolution of TL techniques. Their contributions not only optimize specific challenges in TL but also extend to broader optimization problems such as Function Approximation and Classification.
As future perspectives, further exploration of these techniques in more complex domains and adaptation to specific challenges in diverse areas is suggested. The continuous refinement of these algorithms is essential to tackle emerging challenges and maintain the relevance of these approaches at the forefront of artificial intelligence and machine learning.
This research is funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001, and by Grant #2021/11905-0, São Paulo Research Foundation (FAPESP).
The authors declare there is no conflict of interest.
©2023 Xavier, et al. This is an open access article distributed under the terms of a Creative Commons license, which permits unrestricted use, distribution, and building upon the work non-commercially.