Radial basis neural network optimization using fruit fly


Master's Thesis, 2014

92 Pages, Grade: A


Excerpt


CONTENTS

Acknowledgement

Declaration

Certificate from supervisor

Abstract

List of Figures

List of Tables

List of Abbreviations

Preface

Chapter 1 Introduction
1.1 Introduction
1.2 Motivation and Aim
1.3 Disposition

Chapter 2 Artificial Neural Network
2.1 Biological Neuron
2.2 Artificial Neural Network
2.2.1 Single Layer Perceptron
2.2.2 Multi Layer Perceptron
2.3 Artificial Neuron and Activation Function
2.3.1 Linear Activation Function
2.3.2 Non Linear Activation Functions
2.3.3 Radial Basis Net
2.4 Learning Methods
2.4.1 Supervised Learning
2.4.2 Unsupervised Learning
2.5 Neural Network Applications
2.5.1 Process Control
2.5.2 Speech Recognition
2.5.3 Image Compression
2.5.4 Medical Imaging
2.5.5 Image Processing
2.5.6 Face Recognition
2.5.7 Road and Obstacle Recognition

Chapter 3 Neural Network Parallelism
3.1 General Aspects of Parallel Processing
3.2 Back Propagation Neural Network Parallelism
3.3 Neural Network Distribution
3.4 Distributed Computing for BP Parallelism
3.4.1 Training Set Parallelism
3.4.2 Node Parallelism
3.4.2.1 Neuron Parallelism
3.4.2.2 Synapse Parallelism
3.5 Need for Parallel Processing

Chapter 4 Distributed Computing System
4.1 Distributed System
4.2 Distributed Computing System Models
4.2.1 Minicomputer Model
4.2.2 Workstation Model
4.2.3 Workstation-server Model
4.2.4 Processor Pool Model
4.2.5 Hybrid Model
4.3 Distributed Computing Environment

Chapter 5 Fruit Fly
5.1 Fruit Fly
5.1.1 Fruit Fly Nervous System: A Solution to a Computing Problem
5.1.2 Fruit Fly Takes a Distributive Approach to Computing
5.2 Fruit Fly Revolutionizes Distributed Computing
5.3 Fruit Fly Optimization

Chapter 6 Literature Survey
6.1 Literature Review

Chapter 7 Present Works
7.1 Overview
7.2 aFOA Algorithm
7.3 Radial Basis Neural Network
7.4 Variable Selection
7.5 Model Development
7.5.1 Data Preparation
7.5.2 Constructing Model

Chapter 8 Framework and Technology
8.1 MATLAB
8.2 MemBrain
8.3 System and Models
8.3.1 System Requirement
8.3.2 MemBrain Mathematical Model
8.3.3 Learning Algorithms
8.4 MemBrain DLL
8.5 Automatic Code Generation

Chapter 9 Results and Discussion
9.1 Experimental results and analysis
9.2 Numerical Testing
9.3 Comparison

Chapter 10 Conclusion
10.1 Conclusion
10.2 Challenge and Future Work

References

Paper Publication

ABSTRACT

This research presents the optimization of a radial basis function (RBF) neural network by means of an amended fruit fly optimization algorithm (aFOA) and the establishment of a network model, combined with the evaluation of the mean impact value (MIV) to select variables. The aFOA has a simple form, converges quickly, and does not readily drop into local optima. The validity of the model is tested on two real-world examples, which show it to be simple to apply, stable, and practical.

In many scientific experiments, such as those producing near-infrared spectral data and atlas data, our aim is to find a functional relationship underlying a large body of experimental data. Such a relationship is often a highly uncertain, nonlinear dynamic model. Performing regression analysis on these data requires choosing appropriate independent variables with which to build a regression model of the dependent variables. Experiments typically yield many candidate variables; some have little or no influence on the results, and some are costly to acquire. Drawing unimportant variables into the model reduces its precision and prevents it from reaching the ideal result. A large number of variables may also suffer from multicollinearity. Screening the independent variables before modeling is therefore necessary. The fruit fly optimization algorithm has a concise form, is easy to implement, is fault tolerant, runs quickly, and its iterative optimization is unlikely to fall into local extrema. The radial basis function (RBF) neural network, in turn, has a simple structure, concise training, and fast convergence during learning, and it can approximate any nonlinear function, earning it a reputation for its "local receptive field". For these reasons, this paper puts forward a method that uses the amended fruit fly optimization algorithm to optimize an RBF neural network (the aFOA-RBF algorithm) for variable selection.

ANURAG RANA

Department of Computer Science and Engineering

Arni School of Technology

Arni University.

LIST OF FIGURES

FIG. No. FIG. TITLE

Figure 2.1 Basic Neuron

Figure 2.2 Typical Neuron

Figure 2.3 Neuron Model

Figure 2.4 Artificial Neural Network

Figure 2.5 Single Layer Neural Network

Figure 2.6 Multi Layer Neural Network

Figure 2.7 Basic Element of Linear Neuron

Figure 2.8 Linear Activation Function

Figure 2.9 Non Linear Activation Functions

Figure 2.10 Radial Basis Network

Figure 3.1 Processor Topologies for Simulating ANNs

Figure 3.2 Kernel Portion of an entry node

Figure 3.3 Training of the English Alphabet

Figure 3.4 Synapse Parallelism

Figure 4.1 Workstation Model

Figure 4.2 Workstation Server Model

Figure 4.3 Processor Pool Model

Figure 4.4 DCE Based Distributed System

Figure 5.1 Drosophila melanogaster

Figure 5.2 Male (Right) & Female Fruit Fly

Figure 5.3 Food Finding Iterative Scrounging Process of FOA

Figure 7.1 Radial Basis Neural Network

Figure 7.2 Variable Selection

Figure 8.1 Feed Forward Net

Figure 8.2 Net with Loopback Links

Figure 8.3 Snapshot of a chaotic net

Figure 9.1 aFOA-RBF iterative optimization process for data “Data1”

Figure 9.2 Flies’ optimal path for data “Data1”

Figure 9.3 aFOA-RBF result for data “Data1”

Figure 9.4 aFOA-RBF iterative optimization process for data “Data2”

Figure 9.5 Flies’ optimal path for data “Data2”

Figure 9.6 aFOA-RBF result for data “Data2”

LIST OF TABLES

TAB. No. TAB. TITLE

Table 9.1 Results of five consecutive runs of aFOA-RBF on “Data1”

Table 9.2 Results of five consecutive runs of aFOA-RBF on “Data2”

Table 9.3 Comparison of aFOA with GA

ABBREVIATIONS

illustration not visible in this excerpt

PREFACE

Neural network features such as fast response, storage efficiency, fault tolerance, and graceful degradation on spurious inputs make them appropriate tools for intelligent computer systems. A neural network is an inherently parallel system in which many extremely simple processing units work simultaneously on the same problem, building up a computational device that possesses learning and generalization abilities. Implementing a neural network involves at least three stages: design, training, and testing. The second stage, being CPU intensive, requires most of the processing resources, and depending on network size and structural complexity the learning process can be extremely long. Great effort has therefore been devoted to parallel implementations intended to reduce learning time. With the rapid increase in computer system performance over the past decade, self-selective parallel processing in distributed systems has become a crucial issue. Recently, a fruit fly optimization algorithm (FOA) was proposed to solve stream-shop scheduling, and in this work we empirically study its performance. We optimize a radial basis function (RBF) neural network by means of aFOA and establish a network model, combining it with the evaluation of the mean impact value (MIV) to select variables. The amended fruit fly optimization algorithm (aFOA) has a simple form, converges quickly, and does not readily drop into local optima. The validity of the model is tested on two real-world examples, which show it to be simple to apply, stable, and practical. Finally, numerical testing results are provided, and the comparisons demonstrate the effectiveness of the approach.

Keywords: back-propagation, distributed system, fruit fly optimization algorithm, set-streaming stream-shop scheduling, task division, neighborhood-based search, global cooperation-based search, aFOA, RBF neural network, MIV.

Chapter 1 Introduction

1.1 Introduction

In practical applications of neural networks, a fast response to external events within an extremely short time is highly demanded and expected. However, the widely used gradient-descent-based learning algorithms cannot satisfy real-time learning needs in many applications, especially in large-scale applications and when higher learning accuracy and generalization performance are required. The neural network developed here has a parallel, distributed information-processing structure consisting of a collection of simple processing elements interlinked by signal channels or connections. By combining the strengths of parallel processing with distributed computing, the neural network can reduce the processing times of both the learning and execution stages and efficiently calculate the most probable output with a remarkable degree of accuracy. As such, the objective of this thesis is to demonstrate the enhanced computational speed of a generalized large-scale neural network with broad-based applications using parallel computing. The most useful property of a neural network design is its ability to recognize input it has not seen before while maintaining the basic demands of its specific application.

With the integration of parallel computing, the computational speed of the neural network can be increased further, and as the computing power of personal computers grows, real-time computation of extremely large problems becomes a real possibility. The practical implementations of such a powerful tool span various areas of great interest in engineering applications. From basic image recognition problems, to time-critical military installations in the form of target tracking and identification, to more complex uses in strand recognition, there is huge commercial potential for developing such a system.

1.2 Motivation and Aim

Neural networks consist of many small processors working simultaneously on the same task. A network has the ability to 'learn' from training data and to use its 'knowledge' to compare patterns in a data set. The fruit fly has evolved a method for arranging the tiny, hair-like structures it uses to feel and hear the world that is so efficient that a team of scientists in Israel and at Carnegie Mellon University says it could be used to deploy wireless sensor networks and other distributed computing applications more effectively, with a minimum of communication and without advance knowledge of how the nodes are connected to each other.

In many scientific experiments, such as those producing near-infrared spectral data [34, 37] and atlas data [3, 32], our aim is to find a functional relationship underlying a large body of experimental data. Such a relationship is often a highly uncertain, nonlinear dynamic model. Performing regression analysis on these data requires choosing appropriate independent variables with which to build a regression model of the dependent variables. Experiments typically yield many candidate variables; some have little or no influence on the results, and some are costly to acquire. Drawing unimportant variables into the model reduces its precision and prevents it from reaching the ideal result [38]. A large number of variables may also suffer from multicollinearity. Screening the independent variables before modeling is therefore very necessary [14]. The fruit fly optimization algorithm has a concise form, is easy to implement, is fault tolerant, runs quickly, and its iterative optimization is unlikely to fall into local extrema. The radial basis function (RBF) neural network, in turn, has a simple structure, concise training, and fast convergence during learning, and it can approximate any nonlinear function, earning it a reputation for its "local receptive field". For these reasons, this research puts forward a method that uses the amended fruit fly optimization algorithm to optimize an RBF neural network (the aFOA-RBF algorithm) for variable selection.
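To make the search mechanism concrete, the following is a minimal MATLAB sketch of the basic fruit fly optimization loop (a smell-based random search around the swarm location, followed by a vision-based move to the best fly), minimizing a simple test function. The population size, search radius, and test function are illustrative assumptions; the amendments that define aFOA and its coupling to the RBF network are not reproduced here.

    % Minimal sketch of the basic FOA loop (illustrative settings).
    rng(1);                                 % reproducible random search
    popSize = 20; maxGen = 100;
    X_axis = 10*rand; Y_axis = 10*rand;     % initial swarm location
    bestSmell = inf;
    for gen = 1:maxGen
        % Smell phase: each fly searches randomly around the swarm location.
        X = X_axis + 2*rand(popSize,1) - 1;
        Y = Y_axis + 2*rand(popSize,1) - 1;
        D = sqrt(X.^2 + Y.^2);              % distance of each fly to the origin
        S = 1 ./ D;                         % smell concentration judgment value
        Smell = (S - 0.5).^2;               % fitness to minimize (target S = 0.5)
        [bestVal, idx] = min(Smell);
        % Vision phase: the swarm flies toward the best fly found so far.
        if bestVal < bestSmell
            bestSmell = bestVal;
            X_axis = X(idx); Y_axis = Y(idx);
        end
    end
    fprintf('best fitness %.3g, best S = %.4f\n', bestSmell, 1/sqrt(X_axis^2 + Y_axis^2));

In a setting like aFOA-RBF, the fitness would instead be built from the RBF network's prediction error on the training data, so that the swarm converges on well-performing network parameters.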

1.3 Disposition

Chapter 2: Introduces the state of the art of artificial neural networks and the biological neuron.

Chapter 3: Introduces the parallelism of neural networks, NN distribution, and distributed computing for BP parallelism.

Chapter 4: Introduces distributed system models and environments.

Chapter 5: Introduces the fruit fly and the fruit fly optimization algorithm.

Chapter 6: Discusses the research and literature on fruit fly optimization algorithms.

Chapter 7: Discusses the intended algorithm and system for solving the problem.

Chapter 8: Briefly explains the framework and technology used to implement and analyze the intended algorithm, including testing and comparisons.

Chapter 9: Describes the intended system with experimental analysis, numerical testing, and comparison.

Chapter 10: Conclusion and further enhancement of the work.

CHAPTER 2 Artificial Neural Networks

INTRODUCTION

This chapter describes artificial neural networks in detail and explains how they lend themselves to parallel processing.

2.1 Biological Neuron

The elementary nerve cell, called a neuron, is the fundamental building block of the biological neural network. Its schematic diagram is shown in figure 2.1. A typical cell has three major regions: the cell body, also called the soma; the axon; and the dendrites. The dendrites form a dendritic tree, a very fine bush of thin fibers around the neuron's body. Dendrites receive information from other neurons through axons, long fibers that serve as transmission lines. An axon is a long cylindrical connection that carries impulses from the neuron. The end part of an axon splits into a fine arborization, and each branch terminates in a small end bulb almost touching the dendrites of neighboring neurons. The axon-dendrite contact organ is called a synapse; it is where the neuron introduces its signal to the neighboring neuron. The signals reaching a synapse and received by the dendrites are electrical impulses. Interneuronal transmission is sometimes electrical but is usually effected by the release of chemical transmitters at the synapse; the terminal buttons generate the chemical that affects the receiving neuron. The receiving neuron either generates an impulse to its axon or produces no response. The neuron is able to respond to the total of its inputs aggregated within a short time interval called the period of latent summation. The neuron's response

illustration not visible in this excerpt

Figure 2.1: Biological Neuron [Source:www.neuralpower.com]

is generated if the total potential of its membrane reaches a certain level. The membrane can be considered as a shell, which aggregates the magnitude of the incoming signals over some duration. Specifically, the neuron generates a pulse response and sends it to its axon only if the conditions necessary for firing are fulfilled.

Much is still unknown about how the brain trains itself to process information (Figure 2.2), so theories abound. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.

illustration not visible in this excerpt

Figure 2.2: Typical Neuron

[Source:www.sparknotes.com/testprep/books/sat2/biology/chapter9section1.rhtml]

At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.

Dendrites are branching fibers that extend from the cell body or soma. Dendrites receive activation from other neurons.

The soma, or cell body, of a neuron contains the nucleus and other structures that support chemical processing and the production of neurotransmitters. The soma processes the incoming activations and converts them into output activations.

The axon is a single fiber that carries information away from the soma to the synaptic sites of other neurons (dendrites and somas), muscles, or glands.

The axon hillock is the site of summation for incoming information. At any moment, the collective influence of all the neurons that conduct impulses to a given neuron determines whether an action potential will be initiated at the axon hillock and propagated along the axon.

The myelin sheath consists of fat-containing cells that insulate the axon from electrical activity. This insulation increases the rate of signal transmission. A gap exists between each myelin sheath cell along the axon; since fat inhibits the propagation of electricity, the signal jumps from one gap to the next.

Nodes of Ranvier are the gaps (about 1 µm) between myelin sheath cells along the axon. Since fat serves as a good insulator, the myelin sheaths speed the transmission of an electrical impulse along the axon.

The synapse is the point of connection between two neurons, or between a neuron and a muscle or a gland. Electrochemical communication between neurons takes place at these junctions, which allow signal transmission between axons and dendrites.

Terminal buttons are the small knobs at the end of an axon that release chemicals called neurotransmitters; transmission proceeds by diffusion of these neurotransmitters.

As for the conditions necessary for the firing of a neuron: incoming impulses can be excitatory if they cause firing, or inhibitory if they hinder it. A more precise condition for firing is that the excitation must exceed the inhibition by an amount called the threshold of the neuron, typically a value of about 40 mV. Since a synaptic connection causes either an excitatory or an inhibitory reaction in the receiving neuron, it is practical to assign positive and negative unity weight values, respectively, to such connections. This allows the neuron's firing condition to be reformulated: the neuron fires when the total of the weights of the impulses it receives exceeds the threshold value during the latent summation period.

We construct these neural networks by first trying to deduce the essential features of neurons and their interconnections. We then typically program a computer to simulate these features. However, because our knowledge of neurons is incomplete and our computing power is limited, our models are necessarily gross idealizations of real networks of neurons (Figure 2.3). The incoming impulses to a neuron can only be generated by neighboring neurons and by the neuron itself. Usually, a certain number of incoming impulses are required to make a target cell fire. Impulses that are closely spaced in time and arrive synchronously are more likely to cause the neuron to fire. As mentioned before, observations show that biological networks perform temporal integration and summation of incoming signals. The resulting spatio-temporal processing performed by natural neural networks is a complex process, much less structured than digital computation.

illustration not visible in this excerpt

Figure 2.3: Neuron Model

[Source:www.doc.ic.ac.uk/~nd/surprise_96/journal/vol2/cs11/article2.artn.jpg]

The neural impulses are not synchronized in time as opposed to the synchronous discipline of digital computation. The characteristic feature of the biological neuron is that the signals generated do not differ significantly in magnitude; the signal in the nerve fiber is either absent or has the maximum value. In other words, information is transmitted between the nerve cells by means of binary signals.

After carrying a pulse, an axon fiber is in a state of complete non-excitability for a certain time called the refractory period. For this time interval the nerve does not conduct any signals, regardless of the intensity of excitation. Thus, we may divide the time scale into consecutive intervals, each equal to the length of the refractory period. This enables a discrete-time description of the neurons' performance in terms of their states at discrete time instants. Neuron excitation can then be described in terms of which neurons will fire at instant k+1 given the excitation conditions at instant k: a neuron will be excited at the present instant if the number of excited excitatory synapses exceeded the number of excited inhibitory synapses at the previous instant by at least the number T, where T is the neuron's threshold value.
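A minimal MATLAB sketch of this discrete-time rule, with illustrative synapse states, unity weights, and threshold:

    % State at instant k decides whether the neuron fires at instant k+1.
    x = [1 1 0 1 1];              % synapse states at instant k (1 = excited)
    w = [1 1 1 -1 -1];            % +1 excitatory, -1 inhibitory (unity weights)
    T = 2;                        % neuron threshold
    fires = sum(w(x == 1)) >= T;  % excited excitatory minus inhibitory vs. T
    fprintf('neuron fires at k+1: %d\n', fires);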

2.2 Artificial Neural Network

Artificial neural networks (ANNs) form a discipline that draws its inspiration from the incredible problem-solving abilities of nature's computing engine, the human brain, and endeavors to translate these abilities into computing machines that can then be used to tackle difficult problems in science and engineering. However, all artificial neural network paradigms involve a learning phase in which the neural network is trained with a set of examples of a problem. For practical problems where the training data is large, training times of the order of days and weeks are not uncommon on serial machines. This has been the main stumbling block for artificial

illustration not visible in this excerpt

Figure 2.4: Artificial Neural Network

[Source:www.emeraldinsight.com/content_images]

neural network use in real-world applications and has also greatly impeded its wider acceptability. The problem of large training time can be overcome either by devising faster learning algorithms or by implementing the existing algorithms on parallel computing architectures.

2.2.1 Single Layer Perceptron

A single layer perceptron (SLP) is the simplest form of artificial neural network that can be built. It consists of one or more artificial neurons in parallel. Each neuron in the single layer provides one network output and is usually connected to all of the external inputs. The diagram shown in figure 2.5 illustrates a very simple neural network: it consists of a single neuron in the output layer, with n neurons in the input layer; each circle represents a neuron. The total input stimulus to the neuron in the output layer is

Zin = Σi xiwi = x0w0 + x1w1 + … + xnwn.

The output of the neuron is f(Zin). The input x0 is a special input, referred to as the bias input; its value is normally fixed at 1, and its associated weight w0 is referred to as the bias weight. This approach to building a single layer perceptron encourages a greater understanding of the concepts relating to neural networks. The single layer perceptron implements a form of supervised learning. Supervised neural networks are trained to produce desired outputs when specific inputs are fed into the system, which makes them particularly well suited for modeling and controlling dynamic systems, classifying noisy data, and predicting future events. In this case, building without the toolbox creates a less powerful but functioning SLP. When designing the SLP structure, the weights are assigned small random values, and input and target output patterns are applied.

illustration not visible in this excerpt

Figure 2.5: Single Layer Neural Network

[Source: i.msdn.microsoft.com/hh975375.McCaffrey_Figure1_hires]

The output of the perceptron is calculated from the equation:

y = f(Zin) = f(x0w0 + x1w1 + … + xnwn)
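A minimal MATLAB sketch of this computation, assuming a hard-limit (step) activation and illustrative inputs and weights:

    x  = [1; 0.5; -0.3];        % external inputs x1..xn
    w  = [0.4; -0.2; 0.7];      % connection weights w1..wn
    w0 = 0.1;                   % bias weight (bias input x0 fixed at 1)
    Zin = w0 + w' * x;          % total input stimulus
    y = double(Zin >= 0);       % hard-limit activation f(Zin)
    fprintf('Zin = %.3f, output = %d\n', Zin, y);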

2.2.2 Multi Layer Perceptron

A multilayer perceptron (MLP) network consists of multiple layers of computational units, usually interconnected in a feed-forward way. Each neuron in one layer has directed connections to the neurons of the subsequent layer, as shown in figure 2.6. In many applications the units of these networks apply a sigmoid function as an activation function. The ''universal approximation theorem'' for neural networks states that every continuous function that maps intervals of real numbers to some output interval of real numbers can be approximated arbitrarily closely by a multi-layer perceptron with just one hidden layer. This result holds only for restricted classes of activation functions, e.g. the sigmoid functions. Multi-layer networks use a variety of learning techniques, the most popular being ''back-propagation''. Here, the output values are compared with the correct answers to compute the value of some predefined error function. By various techniques, the error is then fed back through the network, and the algorithm uses this information to adjust the weights of each connection so as to reduce the value of the error function by some small amount.

illustration not visible in this excerpt

Figure2.6: Multi Layer Neural Network

[Source: i.msdn.microsoft.com/hh975375.McCaffrey_Figure2_hires]

After repeating this process for a sufficiently large number of training cycles, the network will usually converge to some state where the error of the calculations is small; in this case, one says that the network has ''learned'' a certain target function. To adjust the weights properly, one applies a general method for non-linear optimization called gradient descent: the derivative of the error function with respect to the network weights is calculated, and the weights are then changed such that the error decreases (thus going downhill on the surface of the error function). For this reason, back-propagation can only be applied to networks with differentiable activation functions. In general, the problem of teaching a network to perform well even on samples that were not used as training samples is a quite subtle issue that requires additional techniques, and it is especially important when only a very limited number of training samples is available; a network that merely memorizes the training data fails to capture the true statistical process generating the data. Computational learning theory is concerned with training classifiers on a limited amount of data. In the context of neural networks, a simple heuristic called early stopping often ensures that the network will generalize well to examples not in the training set. Other typical problems of the back-propagation algorithm are the speed of convergence and the possibility of ending up in a local minimum of the error function.
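The following minimal MATLAB sketch illustrates gradient descent with back-propagation on a one-hidden-layer network learning the XOR mapping; the layer sizes, learning rate, and epoch count are illustrative assumptions, not settings from this thesis.

    rng(0);
    X = [0 0 1 1; 0 1 0 1];            % 2 inputs x 4 training patterns
    T = [0 1 1 0];                     % XOR target outputs
    W1 = randn(3,2); b1 = randn(3,1);  % hidden layer of 3 sigmoid units
    W2 = randn(1,3); b2 = randn;       % single sigmoid output unit
    sig = @(z) 1 ./ (1 + exp(-z));     % log-sigmoid activation
    eta = 0.5;                         % learning rate
    for epoch = 1:20000
        H = sig(W1*X + b1);            % forward pass
        Y = sig(W2*H + b2);
        dY = (Y - T) .* Y .* (1 - Y);      % error gradient at the output layer
        dH = (W2' * dY) .* H .* (1 - H);   % error fed back to the hidden layer
        W2 = W2 - eta * dY * H';       % downhill steps on the error surface
        b2 = b2 - eta * sum(dY);
        W1 = W1 - eta * dH * X';
        b1 = b1 - eta * sum(dH, 2);
    end
    disp(round(Y, 3))                  % should approach [0 1 1 0]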

2.3 Artificial Neuron and Activation Function

The artificial neuron, depicted in figure 2.7, is a processing element modeled on the biological neuron. Its output is calculated by multiplying its inputs by a weight vector; the results are then added together and an activation function is applied to the sum. The activation function is a function used to transform the activation level of a unit, or neuron, into an output signal. Typically, activation functions have a "squashing" effect: they constrain the output to a range.

illustration not visible in this excerpt

Figure 2.7: Basic Elements of Linear Neuron

[Source: www.learnartificialneuralnetworks.com/images]

A neuron consists of three basic components: weights, a threshold, and a single activation function.

Weight factors (w): The values w1, w2, w3, …, wn are weights that determine the strength of the input vector x = [x1, x2, …, xn]^T. Each input is multiplied by the associated weight of the neuron connection, X^T W. A positive weight excites the node output and a negative weight inhibits it.

illustration not visible in this excerpt

Threshold (Φ): The node's internal threshold Φ is the magnitude offset. It affects the activation of the node output y as follows:

illustration not visible in this excerpt

To generate the final output Y, the sum is passed to a non-linear filter f, called the activation function (also transfer function or squashing function), which releases the output Y.

Threshold for a neuron: In practice, neurons generally do not fire (produce an output) unless their total input rises above a threshold value. The total input for each neuron is the sum of the weighted inputs to the neuron minus its threshold value; this is then passed through the sigmoid function. The equation for the transition in a neuron is:

y = 1 / (1 + e^−(Σi wixi − Φ))
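A short MATLAB sketch of this transition, with illustrative inputs, weights, and threshold:

    x   = [0.9; 0.3; 0.6];                 % input vector
    w   = [0.5; -0.4; 0.8];                % synaptic weights
    phi = 0.2;                             % internal threshold
    y   = 1 / (1 + exp(-(w'*x - phi)));    % sigmoid of the offset weighted sum
    fprintf('output y = %.4f\n', y);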

2.3.1 Linear Activation Function

There are many activation functions that can be applied to neural networks; three main activation functions are dealt with in this thesis. The first is the linear transfer function, or purelin function, defined as follows:

f(z) = z

Neurons of this type are used as linear approximators.

illustration not visible in this excerpt

Figure 2.8: Linear Activation Function

[Source: www.saedsayad.com/images/ANN_Sigmoid]

2.3.2 Non Linear Activation Functions

There are several types of non-linear activation functions; the two most common are the log-sigmoid transfer function and the tan-sigmoid transfer function. Plots of these differentiable, non-linear activation functions are illustrated in figure 2.9. They are commonly used in networks trained with back-propagation. The networks referred to in this work are generally back-propagation models, and they mainly use log-sigmoid and tan-sigmoid activation functions. The logistic (log-sigmoid) activation function is defined by the equation

[...]
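For reference, a minimal MATLAB sketch of the three activation functions treated in this chapter, assuming the standard purelin, log-sigmoid, and tan-sigmoid definitions:

    z = -4:2:4;                        % sample activation levels
    f_lin = z;                         % purelin: f(z) = z
    f_log = 1 ./ (1 + exp(-z));        % log-sigmoid: output in (0, 1)
    f_tan = tanh(z);                   % tan-sigmoid: output in (-1, 1)
    disp([z; f_lin; f_log; f_tan])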

End of excerpt from 92 pages

Summary of Information

Title
Radial basis neural network optimization using fruit fly
Course
Master of Technology, Computer Science and Engineering
Grade
A
Author
Year
2014
Pages
92
Catalog Number
V275287
ISBN (eBook)
9783656678717
ISBN (Book)
9783656678724
File size
1189 KB
Language
English
Keywords
radial
Quote paper
M. Tech. CSE Anurag Rana (Author), 2014, Radial basis neural network optimization using fruit fly, Munich, GRIN Verlag, https://www.grin.com/document/275287
