A possible future for MorphOS
  • Order of the Butterfly
    Order of the Butterfly
    Posts: 156 from 2004/11/18
    Price, first of all. When you see entry-level PCs which are slower now than five years ago, it may not seem serious, but it's market reality. Awesome power is not needed anymore for most people. Sufficient power is the word.
  • »31.03.16 - 11:50
    Profile
  • Acolyte of the Butterfly
    Acolyte of the Butterfly
    deka
    Posts: 131 from 2013/2/12
    From: Hungary, Kecsk...
    > Sufficient power is the word

    And sufficient power is more than enough for most apps on MorphOS currently.

    @amigabeliever:
    I think you should collect the pros and cons of your new hardware. I started, but didn't want to type too much.
    There are more serious drawbacks than there are advantages gained.
  • »31.03.16 - 12:56
    Profile
  • Order of the Butterfly
    Order of the Butterfly
    asrael22
    Posts: 404 from 2014/6/11
    From: Germany
    Quote:

    deka wrote:
    >On the contrary it is extremely likely for the following reason: power efficiency.

    Who seriously cares about this?
    The user experience doesn't depend closely on what is inside the computer case/notebook. A well-built machine works well, and a lower-quality product will give a worse user experience. I think the architecture doesn't matter from the user's point of view. What matters is nice ergonomics, good keys, a good screen, etc.


    I don't know about power efficiency.
    But saving power, and with that CO2 (as long as we don't manage to get our electricity from renewable sources), is more important than ever.


    Manfred
  • »31.03.16 - 14:25
    Profile
  • Acolyte of the Butterfly
    Acolyte of the Butterfly
    deka
    Posts: 131 from 2013/2/12
    From: Hungary, Kecsk...
    >But saving power, and with that CO2 (as long as we don't manage to get our electricity from renewable sources), is more important than ever.

    Sounds a bit tub-thumping...
    You can't say that one computer is always better than another just because its processor needs less power.
  • »31.03.16 - 15:31
    Profile
  • Order of the Butterfly
    Order of the Butterfly
    asrael22
    Posts: 404 from 2014/6/11
    From: Germany
    Quote:

    deka wrote:
    >But saving power, and with that CO2 (as long as we don't manage to get our electricity from renewable sources), is more important than ever.

    Sounds a bit tub-thumping...
    You can't say that one computer is always better than another just because its processor needs less power.


    No, I can't. But that is something the consumer and the industry should demand.


    Manfred
  • »31.03.16 - 16:11
    Profile
  • Order of the Butterfly
    Order of the Butterfly
    Posts: 156 from 2004/11/18
    In fact, a G4 is nearly sufficient, and a 2.5 GHz G5 is awesome on MorphOS. I think that using the GPU for some operations like video demuxing or encoding, or for raytracing, would be a great step; it could handle 1080p on a PowerBook. The problem is that the GPU can't be used for this now, because we don't have a library for it. It's a big job, but it could be impressive! GPUs outperform CPUs at complex calculations.
  • »31.03.16 - 17:58
    Profile
  • Paladin of the Pegasos
    Paladin of the Pegasos
    Intuition
    Posts: 1078 from 2013/5/24
    From: Nederland
    Quote:

    asrael22 wrote:
    Quote:

    deka wrote:
    >But saving power, and with that CO2 (as long as we don't manage to get our electricity from renewable sources), is more important than ever.

    Sounds a bit tub-thumping...
    You can't say that one computer is always better than another just because its processor needs less power.


    No, I can't. But that is something the consumer and the industry should demand.


    Manfred


    /Eagerly awaiting the comments from our Texan friends. ;)
    1.67GHz 15" PowerBook G4, 1GB RAM, 128MB Radeon 9700M Pro, 64GB SSD, MorphOS 3.9

    2.7GHz DP G5, 4GB RAM, 512MB Radeon X1950 Pro, OSX 10.5.8, 500GB SSHD, MorphOS 3.9
  • »31.03.16 - 18:26
    Profile
  • Paladin of the Pegasos
    Paladin of the Pegasos
    Zylesea
    Posts: 1977 from 2003/6/4
    Quote:

    deka wrote:
    >On the contrary it is extremely likely for the following reason: power efficiency.

    Who seriously cares about this?


    I do.
    Actually, I don't get a G5 myself because it's an electricity hog.
    But on the other hand, I don't see MIPS delivering anything outstanding there.
    Modern x64 systems balance energy consumption, CPU power, price and availability quite well.
    --
    http://www.via-altera.de

    Whenever you're sad just remember the world is 4.543 billion years old and you somehow managed to exist at the same time as David Bowie.
    ...and Matthias , my friend - RIP
  • »31.03.16 - 20:05
    Profile Visit Website
  • Yokemate of Keyboards
    Yokemate of Keyboards
    amigadave
    Posts: 2736 from 2006/3/21
    From: Lake Arrowhead...
    Quote:

    acepeg wrote:
    In fact, a G4 is nearly sufficient, and a 2.5 GHz G5 is awesome on MorphOS. I think that using the GPU for some operations like video demuxing or encoding, or for raytracing, would be a great step; it could handle 1080p on a PowerBook. The problem is that the GPU can't be used for this now, because we don't have a library for it. It's a big job, but it could be impressive! GPUs outperform CPUs at complex calculations.



    It would be great to find some wizard of a programmer OUTSIDE of the existing MorphOS Dev. Team who could work on giving us better 2D & 3D video card drivers for the existing video cards that are supported, while the rest of the MorphOS Dev. Team works on the port to x64 hardware. At this point in time, I don't think it would be productive to take development time away from the team to optimize video card drivers further, but having better drivers that could take more advantage of the power within our existing GPUs would be awesome for us users while we wait for the x64 version of MorphOS to arrive (probably still years away).

    I'd contribute to a bounty if such a third party developer could be found.
    MorphOS - The best Next Gen Amiga choice.
  • »31.03.16 - 21:14
    Profile
  • Moderator
    Kronos
    Posts: 1985 from 2003/2/24
    @amigadave

    Offloading stuff to the GPU means providing something like OpenCL, which would also largely benefit a potential x86 port.

    How much code could be shared between the different generations of ATI chips is of course a different question, but IMO if something like that were started, it would make perfect sense to start on the R300/400-based cards that we already use.
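    The kind of data-parallel work OpenCL is built for can be illustrated with the classic saxpy kernel (y[i] = a*x[i] + y[i]). Below is a plain-Python CPU reference of what such a kernel computes; in OpenCL, the loop body would be the kernel source and each index a separate work-item dispatched to the GPU. This is only a sketch of the programming model, not actual OpenCL code.

```python
def saxpy(a, x, y):
    """CPU reference for the saxpy kernel: y[i] = a * x[i] + y[i].

    On a GPU via OpenCL, each index i would be a separate work-item,
    so all elements are computed in parallel instead of in a loop.
    """
    return [a * xi + yi for xi, yi in zip(x, y)]


# Example: a = 2.0 applied across two small vectors.
result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
print(result)  # [12.0, 24.0, 36.0]
```

    The win on a GPU comes from the thousands of such per-element computations running at once, which is why drivers exposing this would matter more than raw CPU clock speed.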
    --------------------- May the 4th be with you ------------------
    Mother Russia dance of the Zar, don't you know how lucky you are
  • »31.03.16 - 21:54
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    >>> I don't know if anyone is even developing a MIPs server chip,
    >>> never mind a competitive one.

    >> the market needs a server processor which is power efficient

    > Where is it? Which current MIPS core is suited for that?

    > The current Octeon includes 1 to 48 cores of the type i6400

    The Octeon III uses its own microarchitecture (cnMIPS(64) III, which implements MIPS64r5 ISA), not any from Imagination Technologies (I6400 implements MIPS64r6 ISA).

    > they only sell them as network server processors but they could be used
    > for other types of servers

    "Could be used for" doesn't imply "suited for" or "competitive at".

    > Now that Imagination has a 64 bit p series processor [...] it is only a matter
    > of time before they make processors of with 1 to 48 cores or more of the type
    > p6600 this will be the beginning of the return of the MIPS in the server.

    You seem to have missed that in 2014 (project announced as early as 2012), Cavium switched to ARMv8-A (AArch64) ISA for the Octeon III successor named ThunderX with 8 to 48 Thunder cores:

    https://morph.zone/modules/newbb_plus/viewtopic.php?forum=3&topic_id=7675&start=300
    https://morph.zone/modules/newbb_plus/viewtopic.php?forum=3&topic_id=7675&start=566
    https://morph.zone/modules/newbb_plus/viewtopic.php?forum=3&topic_id=7675&start=636

    There's no indication Cavium will develop a new MIPS core or use any of Imagination's MIPS cores, quite to the contrary:

    http://www.cavium.com/newsevents-Cavium-Unveils-OCTEON-TX.html
    http://www.cavium.com/newsevents-Cavium-Announces-ThunderX2.html

    >> Due to its single CPU core, the SoC wouldn't be popular with anything other than
    >> single-core operating systems such as Amiga-like OS. I doubt IP licensors would
    >> go without fixed licensing fee in this case.

    > Most SoC are only used by a single operating system, I do not see where the
    > difference is.

    The difference between a single-core operating system and a single (SMP-capable) operating system is that the former can only use one single core of a multicore CPU while the latter can use more than one core concurrently.

    > when everything is hardware accelerated, there is much less use for several cores.

    You can't hardware-accelerate everything. There will always be problems that are best solved by code running on the general-purpose CPU core(s) because it's impossible to build special accelerators for every problem out there into a single chip (or on a single board, or into a single system), especially ones to be solved by desktop operating systems.

    > as a last resort measure, which would increase the cost per unit even more
    > but allow a fee per unit from licensor (to lower the upfront cost), it is
    > always possible to include 2 or more cores in the chip and destroy the extra
    > ones before including the chips on the board for the first production run.

    Huh? Why would you do that? MorphOS already runs fine on multicore machines by simply ignoring all but one of the cores. And on modern SoCs, unused cores can be disabled so that they consume (almost) no power. There's no need to destroy anything.

    >> The chip you described (multi-GHz, GPU, hardware overlay, SATA, USB3, GbE,
    >> IP hardware offloading, Wi-Fi, Bluetooth, NOR flash, DCT/IDCT/FFT/IFFT,
    >> layer-4 protocol checksum offloading, IR decoding, GDDR3-SGRAM and RLDRAM3
    >> controllers) would be a very complex SoC. Three or less engineers developing
    >> this SoC in their spare time would need something like a decade to get it
    >> ready for the market.

    > the work they would need to do would be integration [...].

    Yes, I know. This is what I assess would take the mentioned timeframe.

    >> In conclusion, I still think you're a dreamer.

    > All worthwhile projects started as a dream.

    ...with a basis in reality :-P

    > as an absolute last resort, using an existing SoC [...] and adding the missing
    > functionality as external circuits can be workable.

    Words of reason, finally :-)

    > destroying the unnecessary sub-units

    Why destroy and not simply disable (see above)?


    Edit: added link to Cavium's Octeon TX and ThunderX2 press releases

    [ Edited by Andreas_Wolf 01.06.2016 - 19:08 ]
  • »04.04.16 - 17:14
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    > None of MorphOS developers even hesitated to post any reply on it

    ;-)
  • »04.04.16 - 20:16
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    > the involved registered company, in this case, the MorphOS team

    Except that the MorphOS team is not a registered company.

    > the MorphOS team can partner with another Amiga company which does hardware,
    > such as, for example, Acube.

    ACube doesn't develop its own SoCs either.

    > the Warrior platform (released 2014 to market)

    2013 (P5600).

    https://morph.zone/modules/newbb_plus/viewtopic.php?forum=3&topic_id=11623&start=27

    > An 14nm process p6600 or i6400 will beat the power efficiency of a
    > 14nm process ARM or X86 processor

    I6400 and P6600 are at 28 nm (from 2011). As soon as they (or rather their successors) are at 14 nm (from 2014), ARM and x86 will be at 10 nm or smaller.

    > While the MIPS seems the most likely candidate, if it doesn't work, another
    > power-efficient architecture will take the spot. It may be the Power8 and successors

    Not likely in case of POWER.

    https://morph.zone/modules/newbb_plus/viewtopic.php?forum=3&topic_id=9463&start=105

    > it may be possible to use the closest existing p6600-SoC to the specs and to add
    > the missing functionality with existing side chips.

    Yes, on the hardware side this would "only" leave board development.
  • »04.04.16 - 22:08
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    > ARM is pushed by [...] Intel.

    Is it?
  • »04.04.16 - 22:17
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    > using the GPU to [...] video demuxing

    ...wouldn't make sense, I believe.
  • »04.04.16 - 22:40
    Profile
  • Just looking around
    Posts: 15 from 2016/1/22
    > You can't hardware-accelerate everything.
    > There will always be problems that are best solved by code running on the
    > general-purpose CPU core(s) because it's impossible to build special
    > accelerators for every problem out there into a single chip (or on a single
    > board, or into a single system), especially ones to be solved by desktop
    > operating systems.
    Of course, you cannot hardware-accelerate everything. However, many common tasks which can be hardware-accelerated are still generally done on the CPU. Examples include:
    USB physical, link, protocol and functional layers
    packing/unpacking IP data into Ethernet frames
    layer-4 protocol checksum offload (sometimes offered by Ethernet ASICs, but limited to TCP and, rarely, UDP due to restrictive drivers)
    DCT/IDCT/FFT/IFFT (sometimes offered by GPUs, but limited to some image/video codecs due to restrictive drivers)
    If you take all of the above off the CPU, your CPU performance requirement goes down quite a bit. A fast-clocked single core is probably enough for the remaining tasks.
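    To give a sense of what layer-4 checksum offload saves per packet, here is the Internet checksum (RFC 1071) that TCP and UDP use, as a minimal Python sketch; a NIC with checksum offload computes exactly this in hardware for every outgoing and incoming segment.

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum, as used by TCP/UDP/IP.

    Sums the data as big-endian 16-bit words in one's-complement
    arithmetic, then returns the bitwise complement of the sum.
    """
    if len(data) % 2:          # odd length: pad with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:         # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF


# The worked example from RFC 1071, section 3.
payload = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
csum = internet_checksum(payload)

# A receiver verifies by summing data plus the transmitted checksum:
# the result is 0 exactly when the packet arrived intact.
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```

    Per packet this is cheap, but at gigabit rates the CPU would touch every byte of every segment just to sum it, which is exactly the kind of repetitive work worth pushing into silicon.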

    > The difference between a single-core operating system and a single
    > (SMP-capable) operating system is that the former can only use one single core
    > of a multicore CPU while the latter can use more than one core concurrently.
    There was a bit of confusion here; I do understand the difference between single-core and multi-core. I was saying that I do not see any reason why licensors would require an upfront licence fee on single-core SoCs (but not multi-core ones). You said it is because it would only be useful to one operating system, but most, if not all, SoCs are only used in one project (and therefore by one operating system).

    > Why destroy and not simply disable (see above)?
    It is to make sure no one develops software depending on unofficial capabilities which would be removed in successor machines.

    > Except that the MorphOS team is not a registered company.
    Where did you get this? As far as I understand, the company is called MorphOS Development and is headquartered in Wingate Park in South Africa.

    > Yes, on the hardware side this would "only" leave board development.
    This means that in the worst case, this is workable since there are Amiga-related companies which make boards.

    @thread
    Nobody seems to understand the point about power efficiency.
    Yes, I agree that for a desktop computer it is of low importance; for other uses, it is of utmost importance. For low-power devices, yes, ARM is efficient enough, and the market may or may not go on with ARM only. It is however worth noting that some Chinese companies chose the interAptiv/microAptiv (and will soon switch to i6400/p6400), since the Chinese are less obsessed than people in the West with doing what everyone else does. Imagination has stated that the MIPS market is growing. In the low-power world, everything except x86 has acceptable power efficiency at a given manufacturing process, including ARM and MIPS. Being one manufacturing process ahead to compensate for its poor architecture has allowed Intel to push x86 into the mobile world, but other foundries are catching up, so this strategy will no longer work. So the low-power world could either remain ARM-only (x86 is no competitor) or become ARM and MIPS.

    In the server world, as I already said, datacenter operators are fed up with their energy bills. They are looking for a more power-efficient architecture. Everybody hoped that ARM would be the answer; however, once clock speed and core count are scaled up to match a server processor's total performance, it has similar power efficiency to x86 server processors. Other architectures can be much more power-efficient if done correctly; this includes the Power architecture, MIPS, SPARC, etc.

    In other words, there will either be a new architecture breaking into the market besides ARM/x86 for both small devices and servers, in which case MIPS seems most likely, as it is already used by Chinese manufacturers and is supported by the Android OS; or there will be a new architecture for servers only, in which case the Power8 is one possible candidate. The first hypothesis is more likely, since manufacturers will prefer a common architecture for both lightweight devices and servers to control costs. Of course, if the combined-use MIPS market does develop, ARM will remain in use as an alternative for mobile devices, as it already does a satisfying job there.

    This doesn't change the fact that office computers will remain x86-based. The other markets will not be x86-based in the future (including a good chunk of the server market), and ARM cannot solve the server power problem.
  • »14.04.16 - 22:19
    Profile
  • Order of the Butterfly
    Order of the Butterfly
    Posts: 156 from 2004/11/18
    I'm not sure, because the PS4, the Xbox, and the next Nintendo will use x64, and more and more tablets use it. Surely Intel and x64 are not dead for a long time. Except in phones, Intel chips are efficient for the computing power they deliver. Only GPUs are more interesting. Using the GPU for video demuxing/encoding, for raytracing, or for real-time graphics effects is much like the original Amiga spirit: use a chip to do what it does best!

    [ Edited by acepeg 15.04.2016 - 02:09 ]
  • »14.04.16 - 23:06
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    >>> when everything is hardware accelerated, there is much less use for several cores.

    >> You can't hardware-accelerate everything.

    > Of course, you cannot hardware accelerate everything.

    I'm glad I could convince you :-)

    >> Due to its single CPU core, the SoC wouldn't be popular with anything other than
    >> single-core operating systems such as Amiga-like OS. I doubt IP licensors would
    >> go without fixed licensing fee in this case.

    > I do not see any reason why licensors would require an upfront licence fee on
    > single-core SoCs (but not multi core ones).

    The reason is the anticipated sales figures of the SoC. The more sales the IP licensor expects, the more inclined he will be to do licensing based on "cost per unit produced" (as you put it). A widely deployable SoC will generate more sales than a less widely deployable one. Most popular operating systems can make use of multiple CPU cores through SMP, so that's where the market (and the money) is. A single-core SoC would run worse with the popular operating systems, so the IP licensor will anticipate fewer sales of the SoC, which in turn will make him demand a fixed licensing fee (higher than what he expects to get from a license based on "cost per unit produced").

    > You said it is because it would only be useful to one operating system

    No, I said it was because it would only be popular with single-core operating systems such as Amiga-like OS.

    > most, if not all SoCs are only used in one project (and therefore by one operating system).

    No, most SoCs are used in numerous projects and by a number of operating systems. The more projects a SoC is used in, the more successful it will be. And multicore SoCs will be preferred for projects aimed at popular SMP-capable operating systems.

    >> the MorphOS team is not a registered company.

    > Where did you get this?

    "The MorphOS development team is a group of individuals[...]"
    http://www.morphos-team.net/team

    > As far as I understand the company is called MorphOS Development and
    > headquartered in Wingate Park in South Africa.

    This is Mark Olsen's current home address. Former home addresses:

    http://web.archive.org/web/20080715161251/http://www.morphos-team.net/imprint.html
    http://web.archive.org/web/20081009233831/http://www.morphos-team.net/imprint.html
    http://web.archive.org/web/20110814104948/http://www.morphos-team.net/imprint.html

    What is the legal structure of this alleged registered "MorphOS Development" company?

    > p6400

    P6600.

    > there will either be a new architecture breaking into the market besides ARM/x86 for
    > both small devices and servers, in which case MIPS seems most likely, as it is
    > already used by Chinese manufacturers and is supported by the Android OS

    I don't see the connection between servers on the one hand and Chinese manufacturers and Android on the other hand.

    > manufacturers will prefer a common architecture for both lightweight devices and servers
    > to control costs. [...] if the combined-use MIPS market does develop [...]

    Manufacturers of lightweight devices and manufacturers of servers hardly overlap. Lightweight devices and servers are distinct markets. Even if the same ISA was prevalent in both, it would be completely different SoCs/chips and even microarchitectures.
  • »14.04.16 - 23:53
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    >>> using the GPU to [...] video demuxing

    >> ...wouldn't make sense, I believe.

    > Using the GPU for video demuxing [...] is much like the original Amiga spirit.

    Does it really make sense to use the GPU for video demuxing? Does the GPU really have an advantage over the CPU there?
  • »15.04.16 - 00:37
    Profile
  • Jim
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Jim
    Posts: 4913 from 2009/1/28
    From: Delaware, USA
    Yes Andreas,
    Using the GPU to help decode info it will be displaying makes sense.
    It's used in most mainstream OSes and it lowers the CPU workload.
    Also, it's the only way some weak CPUs, like the one in the RPi, can handle HD video.

    [ Edited by Jim 15.04.2016 - 02:23 ]
    "Never attribute to malice what can more readily be explained by incompetence"
  • »15.04.16 - 02:24
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    amigadave
    Posts: 2736 from 2006/3/21
    From: Lake Arrowhead...
    Quote:

    Kronos wrote:
    @amigadave

    Offloading stuff to the GPU means providing something like OpenCL, which would also largely benefit a potential x86 port.

    How much code could be shared between the different generations of ATI chips is of course a different question, but IMO if something like that were started, it would make perfect sense to start on the R300/400-based cards that we already use.


    I agree, but I still think the programming work to optimize video card drivers for the existing supported video cards would be better done by programmers outside our existing MorphOS Dev. Team, so the Dev. Team can concentrate on the code for directly porting MorphOS to x64 architecture.

    In other words, I believe it would be better to have MorphOS ported to x64 architecture sooner, with the same video card drivers as the current PPC version of MorphOS, rather than waiting longer for the x64 port because it also includes better video card drivers.
    MorphOS - The best Next Gen Amiga choice.
  • »15.04.16 - 05:37
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    >>> Using the GPU for video demuxing [...] is much like the original Amiga spirit.

    >> Does it really make sense to use the GPU for video demuxing? Does the GPU
    >> really have an advantage over the CPU there?

    > Yes Andreas, using the GPU to help decode info it will be displaying makes
    > sense. It's used in most mainstream OSes and it lowers the CPU workload. Also,
    > it's the only way some weak CPUs, like the one in the RPi, can handle HD video.

    Am I right in suspecting that you just changed the topic from demuxing to decoding? Demuxing multimedia files or streams means separating the video and audio information contained in a multimedia container and sending them to the respective decoders for actual decoding. Is that really done by the GPU these days? I mean, even if it's technically possible to have the GPU demux the data on its own, isn't the demuxing workload only a minuscule fraction of the decoding workload anyway?

    https://en.wikipedia.org/wiki/Demultiplexer_%28media_file%29
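    To underline how light that separation step is, here is a toy demuxer in Python. The (stream_id, payload) packet tuples stand in for a real container format such as AVI or MKV (a made-up representation, purely illustrative); the expensive part, decoding the payloads, happens elsewhere.

```python
def demux(packets):
    """Route interleaved container packets to per-stream payload lists.

    `packets` is an iterable of (stream_id, payload) tuples. Demuxing is
    just this bookkeeping: look at each packet's stream id and queue the
    payload for the matching decoder (video, audio, subtitles, ...).
    """
    streams = {}
    for stream_id, payload in packets:
        streams.setdefault(stream_id, []).append(payload)
    return streams


# An interleaved file: video and audio packets mixed together.
interleaved = [("video", b"v0"), ("audio", b"a0"),
               ("video", b"v1"), ("audio", b"a1")]
separated = demux(interleaved)
print(separated)  # {'video': [b'v0', b'v1'], 'audio': [b'a0', b'a1']}
```

    Per packet this is a dictionary lookup and a list append, a minuscule fraction of the decode work, which is the point made above.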
  • »15.04.16 - 09:05
    Profile
  • Jim
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Jim
    Posts: 4913 from 2009/1/28
    From: Delaware, USA
    Quote:

    Andreas_Wolf wrote:
    >>> Using GPU for vidéo demuxing [...] is much like the original Amiga spirit.

    >> Does it really make sense to use the GPU for video demuxing? Does the GPU
    >> really have an advantage over the CPU there?

    > Yes Andreas, using the GPU to help decode info it will be displaying makes
    > sense. It's used in most mainstream OSes and it lowers the CPU workload. Also,
    > it's the only way some weak CPUs, like the one in the RPi, can handle HD video.

    Am I right in suspecting that you just changed the topic from demuxing to decoding? Demuxing multimedia files or streams means separating the video and audio information contained in a multimedia container and sending them to the respective decoders for actual decoding. Is that really done by the GPU these days? I mean, even if it's technically possible to have the GPU demux the data on its own, isn't the demuxing workload only a minuscule fraction of the decoding workload anyway?

    https://en.wikipedia.org/wiki/Demultiplexer_%28media_file%29


    Good point, Andreas.
    I have neglected how sound is separated.
    And in most of the hardware that I have owned, I think sound is driven by the CPU out through whatever sound device you use.

    Which makes me wonder how video card manufacturers drive the audio portion of the HDMI signal.
    "Never attribute to malice what can more readily be explained by incompetence"
  • »15.04.16 - 21:00
    Profile
  • Order of the Butterfly
    Order of the Butterfly
    Posts: 156 from 2004/11/18
    Sorry, decoding is indeed the right word. But I'm sure it will help with many things. I think hardware-accelerated Cairo would be awesome.

    [ Edited by acepeg 16.04.2016 - 00:35 ]
  • »15.04.16 - 21:34
    Profile
  • Yokemate of Keyboards
    Yokemate of Keyboards
    Andreas_Wolf
    Posts: 11053 from 2003/5/22
    From: Germany
    >> Am I right in suspecting that you just changed the topic from demuxing to decoding?

    > I have neglected how sound is separated.

    I take this as a 'yes' :-)
  • »15.04.16 - 23:39
    Profile