monicaguduru/Multi-Layer-Perceptron

Multi-Layer-Perceptron

AIM: Design a multi-layer perceptron.

LANGUAGE USED: C

DATA STRUCTURES USED: Arrays

FUNCTIONS DEFINED:
readTraining() - reads the training set and stores it in inputs[][].
readTest() - reads the test data set and stores it in testInput[][].
generateRandom() - generates random weights (inner-to-hidden and hidden-to-outer) and stores them in innerW[][] and outerW[][].
train() - loops through all the inputs and stops the computation according to the stopping criterion.
calculateOutput() - generates the Zk values and stores them in finalAnswer[].
updateError() - calculates the error values.
updateWeights() - distributes the errors and updates all weights accordingly.
test() - classifies the test inputs according to the weights learned from the training set.
classifyClass() - finds the maximum of all generated outputs and returns the class the input belongs to.

ALGORITHM:

FOR UPDATING THE WEIGHTS:
-> scan the training set
-> calculate the output (Zk) values

-> using these output values, calculate the error, i.e. (Tk - Zk)
-> distribute the errors and update the weights:
   -> from outer to hidden
   -> from hidden to input

FOR TEST INPUT:
-> pass the test input through gk() to get the output values.
-> find the maximum among these output values and classify the input accordingly.

CONCLUSION: The cross-entropy loss function performs better than mean squared error; the conclusion was drawn by comparing the test accuracies obtained with the two loss functions.
