ANN Applications in Solar Energy Systems

Artificial neural networks have been used by the author in the field of solar energy, for modeling the heat-up response of a solar steam generating plant (Kalogirou et al., 1998), the estimation of a parabolic trough collector intercept factor (Kalogirou et al., 1996), the estimation of a parabolic trough collector local concentration ratio (Kalogirou, 1996a), the design of a solar steam generation system (Kalogirou, 1996b), the performance prediction of a thermosiphon solar water heater (Kalogirou et al., 1999a), modeling solar domestic water heating systems (Kalogirou et al., 1999b), the long-term performance prediction of forced circulation solar domestic water heating systems (Kalogirou, 2000), and the long-term performance prediction of thermosiphon solar domestic water heating systems (Kalogirou and Panteliou, 2000). A review of these models, together with other applications in the field of renewable energy, is given in an article by Kalogirou (2001). In most of these models, the multiple hidden layer architecture shown in Figure 11.19 was used. The errors reported are well within acceptable limits, which clearly suggests that artificial neural networks can be used for modeling and prediction in other fields of solar energy engineering. What is required is a set of data (preferably experimental) representing the past history of a system, so that a suitable neural network can be trained to learn the dependence of the expected output on the input parameters.
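As an illustration of this approach, the following sketch trains a small multilayer perceptron on a hypothetical data file of past collector measurements; the file name, column layout, network size, and the use of scikit-learn are assumptions made only for illustration, not the models of the studies cited above.

# Illustrative sketch only: train an ANN on past system data held in a
# hypothetical file "collector_data.csv" with columns: collector inlet
# temperature, ambient temperature, solar radiation, measured useful energy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = np.loadtxt("collector_data.csv", delimiter=",", skiprows=1)
X, y = data[:, :3], data[:, 3]            # input parameters and expected output

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers, loosely following the multiple hidden layer
# architecture mentioned above; the layer sizes are arbitrary choices.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on unseen data:", model.score(X_test, y_test))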

11.6.2 Genetic Algorithms

The genetic algorithm (GA) is a model of machine learning that derives its behavior from a representation of the processes of evolution in nature. This is done by the creation, within a machine or computer, of a population of individuals represented by chromosomes. Essentially, these are a set of character strings that are analogous to the chromosomes in the DNA of human beings. The individuals in the population then go through a process of evolution.

It should be noted that evolution, as occurring in nature or elsewhere, is not a purposive or directed process; i.e., no evidence supports the assertion that the goal of evolution is to produce humankind. Indeed, the processes of nature seem to come down to different individuals competing for resources in the environment. Some are better than others; those that are better are more likely to survive and propagate their genetic material.

In nature, the encoding of the genetic information is done in a way that admits asexual reproduction, which typically results in offspring that are genetically identical to the parent. Sexual reproduction allows the creation of genetically radically different offspring that are still of the same general species.

In an oversimplified consideration, at the molecular level, what happens is that a pair of chromosomes bump into one another, exchange chunks of genetic information, and drift apart. This is the recombination operation, which in GAs is generally referred to as crossover because of the way that genetic material crosses over from one chromosome to another.

The crossover operation happens in an environment where the selection of who gets to mate is a function of the fitness of the individual, i.e., how good the individual is at competing in its environment. Some GAs use a simple function of the fitness measure to select individuals (probabilistically) to undergo genetic operations, such as crossover or asexual reproduction (i.e., propagation of the genetic material unaltered). This is called fitness-proportionate selection. Other implementations use a model in which certain randomly selected individuals in a subgroup compete and the fittest is selected. This is called tournament selection. The two processes that contribute most to evolution are crossover and fitness-based selection/reproduction. Mutation also plays a role in this process.
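As a minimal sketch (in Python, not tied to any particular GA library), the two selection schemes could be written as follows, each returning the index of the selected individual from a list of non-negative fitness values:

import random

def fitness_proportionate_selection(fitnesses):
    # Roulette-wheel selection: an individual is chosen with probability
    # proportional to its (non-negative) fitness.
    pick = random.uniform(0, sum(fitnesses))
    running = 0.0
    for i, f in enumerate(fitnesses):
        running += f
        if running >= pick:
            return i
    return len(fitnesses) - 1

def tournament_selection(fitnesses, tournament_size=3):
    # Tournament selection: a randomly selected subgroup competes and the
    # fittest member of the subgroup is selected.
    contestants = random.sample(range(len(fitnesses)), tournament_size)
    return max(contestants, key=lambda i: fitnesses[i])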

GAs are used in a number of application areas. An example of this would be multidimensional optimization problems, in which the character string of the chromosome can be used to encode the values for the different parameters being optimized.

Therefore, in practice, this genetic model of computation can be implemented by having arrays of bits or characters to represent the chromosomes. Simple bit manipulation operations allow the implementation of crossover, mutation, and other operations.
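For instance, with a chromosome held as a list of bits, single-point crossover and bitwise mutation reduce to a few lines; the following is only an illustrative sketch:

import random

def crossover(parent1, parent2):
    # Single-point crossover: genetic material crosses over from one
    # chromosome to the other at a randomly chosen cut point.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:], parent2[:point] + parent1[point:]

def mutate(chromosome, rate=0.01):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

# Example with two 8-bit chromosomes
parent1 = [0, 0, 0, 0, 1, 1, 1, 1]
parent2 = [1, 1, 1, 1, 0, 0, 0, 0]
child1, child2 = crossover(parent1, parent2)
child1 = mutate(child1)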

When the GA is executed, it is usually done in a manner that involves the following cycle. Evaluate the fitness of all of the individuals in the population. Create a new population by performing operations such as crossover, fitness-proportionate reproduction, and mutation on the individuals whose fitness has just been measured. Discard the old population and iterate using the new population. One iteration of this loop is referred to as a generation. The structure of the standard genetic algorithm is shown in Figure 11.21 (Zalzala and Fleming, 1997).

With reference to Figure 11.21, in each generation, individuals are selected for reproduction according to their performance with respect to the fitness function. In essence, selection gives a higher chance of survival to better individuals.

Genetic algorithm
t = 0                                        [start with an initial time]
Initialize population P(t)                   [initialize a usually random population of individuals]
Evaluate fitness of population P(t)          [evaluate fitness of all individuals in population]
While (Generations < Total number) do
begin
    t = t + 1                                [increase the time counter]
    Select population P(t) out of population P(t-1)   [select sub-population for offspring production]
    Apply crossover on population P(t)
    Apply mutation on population P(t)
    Evaluate fitness of population P(t)      [evaluate new fitness of population]
end

FIGURE 11.21 The structure of a standard genetic algorithm.
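A minimal, self-contained Python sketch of this cycle is given below; the fitness function (a toy "one-max" problem counting 1-bits), the population size, and the operator rates are placeholder assumptions chosen only to make the loop runnable:

import random

POP_SIZE, CHROM_LEN, GENERATIONS, MUT_RATE = 30, 16, 50, 0.02

def fitness(chromosome):
    # Placeholder fitness measure: the number of 1-bits in the chromosome.
    return sum(chromosome)

def tournament(population, k=3):
    # Select the fittest of k randomly chosen individuals.
    return max(random.sample(population, k), key=fitness)

# t = 0: initialize a (usually random) population and evaluate its fitness
population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]

for t in range(1, GENERATIONS + 1):
    new_population = []
    while len(new_population) < POP_SIZE:
        # Select parents according to their fitness
        p1, p2 = tournament(population), tournament(population)
        # Apply crossover and mutation to produce an offspring
        point = random.randint(1, CHROM_LEN - 1)
        child = p1[:point] + p2[point:]
        child = [b ^ 1 if random.random() < MUT_RATE else b for b in child]
        new_population.append(child)
    # Discard the old population and iterate using the new one
    population = new_population

print("Best fitness after", t, "generations:", fitness(max(population, key=fitness)))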

Subsequently, genetic operations are applied to form new and possibly better offspring. The algorithm is terminated either after a certain number of generations or when the optimal solution has been found. More details on genetic algorithms can be found in Goldberg (1989), Davis (1991), and Michalewicz (1996).

The first generation (generation 0) of this process operates on a population of randomly generated individuals. From there on, the genetic operations, in concert with the fitness measure, operate to improve the population.

During each step of the reproduction process, the individuals in the current generation are evaluated by a fitness function, which is a measure of how well the individual solves the problem. Then, each individual is reproduced in proportion to its fitness: the higher the fitness, the higher its chance to participate in mating (crossover) and produce offspring. A small number of newborn offspring undergo the action of the mutation operator. After many generations, only those individuals that have the best genetics (from the point of view of the fitness function) survive. The individuals that emerge from this "survival of the fittest" process are the ones that represent the optimal solution to the problem specified by the fitness function and the constraints.

Genetic algorithms are suitable for finding the optimum solution in problems where a fitness function is present. Genetic algorithms use a "fitness" measure to determine which individuals in the population survive and reproduce. Thus, survival of the fittest causes good solutions to progress. A genetic algorithm works by the selective breeding of a population of "individuals," each of which is a potential solution to the problem. The genetic algorithm seeks to breed an individual that either maximizes, minimizes, or is focused on a particular solution to a problem.

The larger the breeding pool size, the greater the potential for producing a better individual. However, since the fitness value of every individual must be compared with the fitness values of all other individuals on every reproductive cycle, larger breeding pools take more time. After testing all the individuals in the pool, a new "generation" of individuals is produced for testing.

During the setting up of the GA, the user has to specify the adjustable chromosomes, i.e., the parameters that would be modified during evolution to obtain the maximum value of the fitness function. Additionally, the user has to specify the ranges of these values, called constraints.
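One possible way to represent this (a sketch under the assumption that each gene is kept as a normalized value between 0 and 1) is to map every gene onto its user-specified range before the fitness function is evaluated; the parameter names and ranges below are purely hypothetical:

# Hypothetical adjustable chromosomes with range constraints.
constraints = {"param1": (0.0, 10.0),     # allowed range of the first parameter
               "param2": (-5.0, 5.0)}     # allowed range of the second parameter

def decode(genes, constraints):
    # Map normalized genes in [0, 1] onto their constrained physical ranges.
    decoded = {}
    for gene, (name, (low, high)) in zip(genes, constraints.items()):
        decoded[name] = low + gene * (high - low)
    return decoded

print(decode([0.25, 0.75], constraints))  # {'param1': 2.5, 'param2': 2.5}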

A genetic algorithm is not gradient based and uses an implicitly parallel sampling of the solution space. The population approach and multiple sampling mean that it is less subject to becoming trapped in local minima than traditional direct approaches and that it can navigate a large solution space with a relatively small number of samples. Although not guaranteed to provide the globally optimum solution, GAs have been shown to be highly effective at reaching a very near optimum solution in a computationally efficient manner.

The genetic algorithm is usually stopped after the best fitness remains unchanged for a number of generations or when the optimum solution is reached.

An example of using GAs in this book is given in Chapter 3, Example 3.2, where the two glass temperatures are varied to get the same Qt/Ac value from Eqs. (3.15), (3.17), and (3.22). In this case, the values of Tg1 and Tg2 are the adjustable chromosomes, and the fitness function is the sum of the absolute differences between each Qt/Ac value and the mean Qt/Ac value (obtained from the aforementioned three equations). In this problem, the fitness function should be 0, so that all Qt/Ac values are equal, which is the objective. Other applications of GAs in solar energy are given in the next section.
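A sketch of such a fitness function is given below; the three placeholder callables stand in for Eqs. (3.15), (3.17), and (3.22), which are not reproduced here, and the assumption that each takes the two glass temperatures as arguments is made only for illustration:

def fitness(Tg1, Tg2, eq_3_15, eq_3_17, eq_3_22):
    # Each placeholder equation is assumed to return a Qt/Ac value for the
    # given glass temperatures Tg1 and Tg2.
    values = [eq_3_15(Tg1, Tg2), eq_3_17(Tg1, Tg2), eq_3_22(Tg1, Tg2)]
    mean = sum(values) / len(values)
    # Sum of absolute deviations from the mean; the GA drives this toward 0,
    # at which point all three Qt/Ac values are equal.
    return sum(abs(v - mean) for v in values)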
