Show simple item record

dc.contributor.author: Lyon, C.
dc.contributor.author: Frank, R.
dc.date.accessioned: 2007-07-25T15:27:24Z
dc.date.available: 2007-07-25T15:27:24Z
dc.date.issued: 1997
dc.identifier.citation: Lyon, C. & Frank, R. 1997, 'Using single layer networks for discrete, sequential data: an example from natural language processing', Neural Computing and Applications, vol. 5, no. 4, pp. 196-214. https://doi.org/10.1007/BF01424225
dc.identifier.issn: 0941-0643
dc.identifier.other: PURE: 92535
dc.identifier.other: PURE UUID: 2cef0b47-a64e-4cc1-9420-f6f80279bc10
dc.identifier.other: dspace: 2299/278
dc.identifier.other: Scopus: 27144436473
dc.identifier.uri: http://hdl.handle.net/2299/278
dc.description: The original publication is available at www.springerlink.com. Copyright Springer. DOI: 10.1007/BF01424225
dc.description.abstract: Natural Language Processing (NLP) is concerned with processing ordinary, unrestricted text. This work takes a new approach to a traditional NLP task, using neural computing methods. A parser which has been successfully implemented is described. It is a hybrid system, in which neural processors operate within a rule-based framework. The neural processing components belong to the class of Generalized Single Layer Networks (GSLN). In general, supervised, feed-forward networks need more than one layer to process data. However, in some cases data can be pre-processed with a non-linear transformation, and then presented in a linearly separable form for subsequent processing by a single layer net. Such networks offer advantages of functional transparency and operational speed. For our parser, the initial stage of processing maps linguistic data onto a higher-order representation, which can then be analysed by a single layer network. This transformation is supported by information-theoretic analysis. Three different algorithms for the neural component were investigated. Single layer nets can be trained by finding weight adjustments based on (a) factors proportional to the input, as in the Perceptron, (b) factors proportional to the existing weights, and (c) an error minimization method. In our experiments generalization ability varies little; method (b) is used for a prototype parser. This is available via telnet.
dc.language.iso: eng
dc.relation.ispartof: Neural Computing and Applications
dc.title: Using single layer networks for discrete, sequential data: an example from natural language processing
dc.contributor.institution: School of Computer Science
dc.contributor.institution: Science & Technology Research Institute
dc.description.status: Peer reviewed
rioxxterms.versionofrecord: https://doi.org/10.1007/BF01424225
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true
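The abstract lists three weight-update schemes for training single layer networks. As a generic illustration of scheme (a), the classic Perceptron rule (weight adjustments proportional to the input), here is a minimal sketch on a toy linearly separable problem; the data, dimensions, and function names are illustrative assumptions, not the paper's implementation:

```python
# Minimal single layer (one linear threshold unit) trained with the
# Perceptron rule: on a misclassification, adjust weights by an amount
# proportional to the input, scaled by the target label.
# Toy data below is illustrative, not from the paper.

def train_perceptron(samples, labels, n_features, epochs=20, lr=1.0):
    """Train one linear threshold unit; labels are +1/-1."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # misclassified: move weights toward (or away from) the input
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Apply the trained threshold unit to one input vector."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy problem (logical OR on binary inputs).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, 1, 1, 1]
w, b = train_perceptron(X, Y, n_features=2)
```

Because the toy problem is linearly separable, the Perceptron convergence theorem guarantees the loop finds a separating weight vector; the paper's point is that the non-linear pre-processing stage makes the linguistic data separable enough for such a single layer to handle.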

