Intellectual rights on the Internet

Description:

The Internet is an ideal way of sharing information of all types. This information can be distributed through different means; documents, pictures, and sounds are just some examples of the media that can be shared. When sharing information it is always important to take the ownership of that information into account: sharing information without the owner's permission is prohibited by law.


The relative ease with which information can be shared over the Internet has sparked a sharp rise in information that is shared illegally. The best-known example is the sharing of music files (MP3s) through peer-to-peer programs without the artist's consent: by ripping music from a CD, individuals can upload it to any other computer linked to the Internet that requests that file. The increase in Internet bandwidth, CPU speeds, and hard disk space has resulted in an increase in shared movie files as well.
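Technically there is almost nothing to it: once ripped, a song is just a sequence of bytes that any networked program can transmit. The following sketch is purely illustrative (the file name, port number, and single-connection design are assumptions made for brevity, not the behaviour of any real peer-to-peer client); it shows a bare-bones peer in Python serving a local file to whichever computer connects:

    # Hypothetical minimal "peer": serves one local file to one requester.
    import socket

    SHARED_FILE = "song.mp3"   # assumed ripped file; name is illustrative
    PORT = 6346                # arbitrary example port

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("", PORT))
        server.listen()
        conn, addr = server.accept()          # wait for one requesting peer
        with conn, open(SHARED_FILE, "rb") as f:
            conn.sendfile(f)                  # stream the raw bytes across

Real peer-to-peer networks add search, peer discovery, and chunked multi-source downloads on top of this basic transfer, which is part of what makes the practice so hard to police.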


Because of the difficulty of tracking down individuals who share this information without permission, it is very hard to put a stop to these practices. Several publishers and artists in the music and movie industries have nevertheless been trying to do so through lawsuits aimed at individuals.

Enablers:

1. Technology: faster computers and Internet connections

2. Different copyright laws: Every country has a different approach to copyright, making Internet sharing more complex

3. Internet pirates: people who are willing to share information illegally

4. Lack of security: information is easily 'stolen' or ripped

5. The difficulty of tracking pirates down

Inhibitors:

1. Reluctance to change laws

2. Lack of information exchange: this occurs not only between different countries but also between providers, police, etc.

3. Conflicting laws: privacy laws protect a pirate's identity in some cases

4. Rapid changes in technology make it impossible for security measures to keep up

Paradigms:

1. Simple tasks can already be learned today by artificial neural networks. Further investigation into the power of those systems, as well as into combining them with conventional computer systems, will increase the power of a connected world such as the Internet.

2. ANNs will disappear as black boxes into our daily lives, supporting us with simple decision making where mistakes are allowed (a child's level). To increase the learning effect and for control purposes, these boxes will be interconnected via the Internet.
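To make the first point concrete, the sketch below (a minimal illustration, not drawn from the sources cited on this page; the network size and learning rate are arbitrary choices) trains a small feed-forward network by back-propagation to learn the XOR function, exactly the kind of simple task such systems can already handle:

    # Minimal sketch: a 2-4-1 feed-forward network trained by plain
    # batch gradient descent to learn XOR. Hyperparameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))   # hidden layer
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))   # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)              # forward pass: hidden layer
        out = sigmoid(h @ W2 + b2)            # forward pass: output
        d_out = (out - y) * out * (1 - out)   # output error signal
        d_h = (d_out @ W2.T) * h * (1 - h)    # back-propagated hidden error
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0, keepdims=True)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0, keepdims=True)

    print(out.round(2))   # should approach [[0], [1], [1], [0]]

A single-layer Perceptron cannot represent XOR (the limitation noted in the 1969 entry of the timeline below); it is the hidden layer that makes the task learnable.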


Experts:

Prof. Dr. Hugo de GARIS,

Associate Professor,

Head, Brain Builder Group,

Computer Science Dept.,

Utah State University, USU,

Old Main 423, Logan,

Utah, UT 84322-4205, USA.

tel: + 1 435 797 0959

fax: + 1 435 797 3265

cell: +1 435 512 1826

degaris@cs.usu.edu

http://www.cs.usu.edu/~degaris


Timing:

1933: psychologist Edward Thorndike suggests that human learning consists in the strengthening of some (then unknown) property of neurons.

1943: the first artificial neuron model is proposed (neurophysiologist Warren McCulloch & logician Walter Pitts).

1949: psychologist Donald Hebb suggests that a strengthening of the connections between neurons in the brain accounts for learning.

1954: first computer simulations of small neural networks at MIT (Belmont Farley and Wesley Clark).

1958: Rosenblatt designs and develops the Perceptron, the first neural network with three layers.

1969: Minsky and Papert generalise the limitations of single-layer Perceptrons to multilayered systems (e.g. the XOR function is not possible with a single-layer Perceptron).

1972: A. Harry Klopf develops a basis for learning in artificial neurons grounded in a biological principle of neuronal learning called heterostasis.

1974: Paul Werbos develops the back-propagation learning method, still the most well-known and widely applied neural-network training method today.

1975: Kunihiko Fukushima develops a stepwise-trained multilayered neural network for the interpretation of handwritten characters (the Cognitron).

1986: David Rumelhart & James McClelland train a network of 920 artificial neurons to form the past tenses of English verbs (University of California at San Diego).

Web Resources:

1. http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html

2. http://www.inns.org/

3. http://www.nd.com/

4. http://www.dacs.dtic.mil/techs/neural/neural_ToC.html

5. http://www.ieee-nns.org/

6. http://www.economist.com/opinion/PrinterFriendly.cfm?Story_ID=1143317: The mind's eye

7. http://www.hirnforschung.net/cneuro/