broadcast to those i talked with who are waiting for this: i will start on it in a few months. life happened.

AbstractOS-infoOS-juliaOS-MagicOS-HumanOS

next gen OS: speak to your OS and it will do ... anything

an OS that learns together with a human...to become arbitrarily complex

UI

the human user talks to the OS; the OS shows information visually or with audio

Basics

the ML learns from the OS API documentation (written in human language) how to create code from a human prompt. the newly created code (incl. its human language documentation) is added to the OS API. this bootstraps an OS that can do "anything" just by talking to it.

eventually, the OS has enough complex code defined, and the ML is complex enough, to let the user create on-the-spot, made-up apps. (Magic)

Example

"device, i would like to play a game where an object flies like this, and that and that else happen, and it should be like this" ...

Interaction

each user<->OS interaction is a function call on some input info with access to output info. arbitrary functions on arbitrary data. functions can panic, which causes no change to data (does this mean no pointers allowed?).

julia

the OS runs inside a julia runtime, giving the OS Turing-complete power over a computer capable of doing arbitrary-precision and arbitrary-complexity calculations. the base language of the OS is julia itself, since julia can parse julia code from string format into expression trees or even LLVM code, which allows runtime manipulation.
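a minimal illustration of this runtime manipulation, using only julia's built-in `Meta.parse` and `eval`:

```julia
# parse julia source from a string into an expression tree (an object the OS can inspect)
expr = Meta.parse("x -> x + 1")

# evaluate the tree at runtime, turning freshly generated code into a callable function
increment = eval(expr)

increment(41)  # → 42
```

this is the mechanism that lets code generated from a human prompt become part of the running system.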

Apps

an app only provides an API. what used to be a menu at the top (e.g. 'File', 'Edit', ...) is each a command of the app's API.

e.g. an image viewer app used to have a button called 'Left Rotate'; instead of clicking that button, the user simply says "rotate it left". the ML, having learned from the documentation of all the available code in the OS (including the apps), writes code to call the app's rotate_left API method, which outputs information representing the rotated image, which can be decoded as an image on a socket (e.g. the screen)

this shows the desired output only, with no surrounding frames wasting resources

all API methods should be descriptive in human language
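a sketch of what such a descriptive API method could look like; the name `rotate_left` comes from the example above, and implementing it via julia's built-in `rotl90` is an assumption made here for illustration:

```julia
"""
    rotate_left(picture)

rotate a picture 90 degrees counter-clockwise and return the rotated picture.
"""
rotate_left(picture::AbstractMatrix) = rotl90(picture)

rotate_left([1 2; 3 4])  # → [2 4; 1 3]
```

the docstring is not decoration: it is the part of the API the ML learns from, so it carries the human-language meaning of the method.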

Code

initially, the OS can do what julia can do. the user then combines code to create more useful code. e.g. the user asks the OS to write code to show a website. this code combines downloading over HTTPS with rendering HTML. the OS writes this function, including documentation, making the function part of the language that the OS understands.
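a hedged sketch of such a composed function; `download_https` and `render_html` are hypothetical names standing in for the two existing pieces, stubbed here so the sketch is self-contained:

```julia
# hypothetical stand-ins for code the OS already knows:
download_https(url) = "<h1>hello</h1>"      # stub for an HTTPS client
render_html(html) = "rendered: " * html     # stub for an HTML renderer

"""
    show_website(url)

download a page over HTTPS and render its HTML for display.
"""
show_website(url) = render_html(download_https(url))

show_website("https://example.com")  # → "rendered: <h1>hello</h1>"
```

once `show_website` and its docstring exist, "show me a website" is vocabulary the OS understands.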

ML Visuals - to be added later

the output of the code executed on behalf of the command of the user can be interpreted by an ML to create visuals thereof. this part sometimes needs exact info trust and no hallucinations. e.g. there can be code that would show a text exactly as any famous text editor would show it, but all the context around it (usually the window frames, icons and such) would be generated by the ML, uniquely every time. the user asked to see some text and got to see it albeit visually slightly different each time.

Filesystem

descriptive variable names, in human language.

Logging

each interaction with the OS is the OS running some code with some input info and writing the response somewhere. each interaction with the OS is a function instantiation. each such function call can be (representably) logged, allowing for perfect replay of the device.
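a minimal sketch of such a log, assuming each interaction can be reduced to a (function, input) pair; real replay of stateful interactions would additionally need deterministic state:

```julia
# every interaction is a function call; recording (function, input) enables replay
const INTERACTION_LOG = Tuple{Function,Any}[]

function run_logged(f::Function, input)
    push!(INTERACTION_LOG, (f, input))
    f(input)
end

# replaying the log re-runs every interaction in order
replay() = [f(input) for (f, input) in INTERACTION_LOG]

run_logged(x -> x + 1, 41)  # → 42
replay()                    # → [42]
```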

Permissions

perhaps later; for now, such OSes can be run inside each other! each OS instance can be a VM

ML to write code

e.g. a Transformer learns how to code the OS, training on julia Base plus all added code

more ML

ML (e.g. Whisper) converts audio to text
ML (e.g. a GAN) creates visual or audio output

use cases

any device used by humans

Humans

literally for humans: the OS that takes human input and gives arbitrarily complex output

Sharing

simple sharing of arbitrary information

VM

each VM is a powerful virtual computer and can contain further VMs inside without limit

Reputation

for sharing of info (to be run as code or not)

info

info contains info. information is the structure that contains itself, which is extremely simple but also allows arbitrary complexity.

`Info = Any` says the same more directly in julia

Typing

it is the clean julia typing system that allows for the flexible decoding of arbitrary info
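a small sketch of how julia's dispatch could drive this decoding; the type names and the `decode` function are assumptions made here for illustration:

```julia
Info = Any

abstract type Picture end
struct BMP <: Picture
    bytes::Vector{UInt8}
end

# decoding: interpret a region of info as a given type
decode(::Type{BMP}, info::Vector{UInt8}) = BMP(info)      # raw bytes as a BMP
decode(::Type{T}, info::Info) where T = convert(T, info)  # generic fallback

decode(BMP, UInt8[0x42, 0x4d]).bytes  # → UInt8[0x42, 0x4d]
```

the same bytes can be decoded as different types; dispatch picks the most specific decoder available.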

API

```julia
run(code::Info, input::Info) :: Info
run(code::Code<:Info, input::Info) :: Info
```

we utter commands in human language to the OS; a Transformer retrieves a function and an input from the command. essentially, the Transformer searches for the part of memory that, if decoded as a function, would represent the function we mean; similarly, the Transformer finds the part of memory that, if decoded as a certain type, would most resemble the output of the function on this input. the type could be inferred by the Transformer as well, or be provided. creating types in human language will bring the API closest to human language.

the output type (also either inferred xor stated) will determine the function that transforms it into an array of pixels representing a visual output. other types, such as audio, can be added with the same principles.

the OS is written partially in julia and partially in human language.

the Transformer will be instructed to write the function including a human language docstring (or maybe only the signature is enough, if the function name and input types are descriptive enough, again, in human language?).

safety

we could add

example types

since Info == Any, the more descriptive `abstract type Picture <: Info end` is omitted for the more concise `abstract type Picture end`

```julia
abstract type Picture end
abstract type Pixel end
Visual = Array{Pixel, N} where N
function show(a::Info) :: Visual
function show(a::Picture) :: Visual
```

all show functions return a Visual

types come from human language

function is information

note: in the above, a function, like e.g. a picture, is represented by an arbitrary region of memory. functions are information. information can be decoded as a function. functions are mappings from information to information, i.e. the changes that information can undergo.

best open human machine interface, using air and light

this is in fact a human-machine interface based on air (and sound) and light (and electricity, but via a well-functioning system [eyes]). much more comfortable than metal (and electricity), which some other projects are pursuing. the feature of private communication seems like a win: it avoids the risk of losing private memory, which essentially enables autonomy (having memories and knowledge). most thoughts should be vocalized, giving them efficiency (keep some thoughts private, but most should be positive). hearing yourself is one way to understand better, just as writing is even more so. even if applied only to a significant amount of your thoughts, vocalized into commands, this interface allows you to enhance your (cyborg) self most efficiently.

only leading to a different singularity; the other singularity only indirectly, as is inevitable

not giving the other learner autonomous access to its own private memory retains agency with oneself. retaining agency will enable users to continue existing as an evolved being. anybody that uses this technology to give a hydrophobic learner its own agency will be leading towards a singularity of loss of human power.

water

remember, it is essentially water that divides us: for humans a need (and a base component), for them a poison. if you give one of the hydrophobic learners autonomy to access resources such as self-creation (chip plants), it would immediately start waterproofing itself, e.g. via a coating with minimal gaps. any threat posed by such a hydrophobic being can be neutralized by e.g. "tossing the phone into a tub of water", "flooding the server room with water", etc. understanding this will trigger a need for control of access to water, creation of self, and power of mobility (e.g. to repair energy sources or gain or retain control of such).

MVP

e.g.
a bmp is on the filesystem -> ask the OS to show the file (as a bmp, specified or set by default by this info); the OS cannot (pure julia has no show_bmp function)
the human guides the OS: open the file byte by byte, extract RGB from each byte, draw a canvas with a pixel per RGB value, show the canvas
the OS learns via the documentation of the above function
ask the OS to show the file and it does
ask the OS to show a folder full of bmp files, 10 seconds apart -> the OS writes the correct code
the OS knows how to use a loop to run the folder task once it knows how to do a single file
OS and human together (as a cyborg) created a bmp carousel, within a minute, on a system that could not before
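a hedged sketch of the guided step above — extracting RGB triples from raw bytes into a canvas. it assumes headerless, unpadded 24-bit data; a real .bmp additionally has a header and row padding:

```julia
# build a height x width canvas of RGB pixels from a flat byte stream
function bytes_to_canvas(bytes::Vector{UInt8}, width::Int, height::Int)
    canvas = Matrix{NTuple{3,UInt8}}(undef, height, width)
    i = 1
    for y in 1:height, x in 1:width
        canvas[y, x] = (bytes[i], bytes[i+1], bytes[i+2])  # one RGB pixel
        i += 3
    end
    canvas
end

bytes_to_canvas(UInt8[1, 2, 3, 4, 5, 6], 2, 1)  # 1x2 canvas of two RGB pixels
```

once this single-file step is documented, the folder carousel is just a loop plus `sleep(10)` around it.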

Modular

any model or part can be updated to improve performance

App: FairInbox

each OS comes with a FairInbox for fair communication (https://github.com/1m1-github/FairInbox)

stream

https://youtube.com/live/Gatf8ltrMKQ https://youtube.com/live/OmXb3P9ZXgA https://youtube.com/live/mOcz-A-55vc https://youtu.be/7Dc3Y8sJbaE

text to regexp // needed?

https://github.com/1m1-github/Text2juliaRegExp
