Is Blt*Mask*BitMapRastPort() hardware accelerated?
  • Order of the Butterfly
    ChrisH
    Posts: 167 from 2009/11/26
    Just a quick question. Is Blt*Mask*BitMapRastPort() hardware accelerated?

    If not, does MorphOS have something similar that is h/w accelerated?
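    (For reference, a typical call to the function in question looks roughly like the sketch below; the helper name is invented for illustration, and 0xE0 is the minterm commonly used for a plain copy-through-mask, per the graphics.library autodoc.)

        #include <graphics/gfx.h>
        #include <graphics/rastport.h>
        #include <proto/graphics.h>

        /* Blit "src" into "dest" at (x,y), keeping only the pixels whose bit
         * is set in "mask" (a single bitplane, one bit per pixel). This is
         * the classic CPU-driven masked blit the question is about. */
        static void blit_through_mask(struct BitMap *src, struct RastPort *dest,
                                      LONG x, LONG y, LONG w, LONG h,
                                      PLANEPTR mask)
        {
            BltMaskBitMapRastPort(src, 0, 0, dest, x, y, w, h, 0xE0, mask);
        }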
    Author of the PortablE programming language.
    It is pitch black. You are likely to be eaten by a grue...
  • »13.03.12 - 09:42
  • MorphOS Developer
    itix
    Posts: 1516 from 2003/2/24
    From: Finland
    ChrisH,

    Nope. I think the best alternative is to use BltBitMapAlpha() or BltBitMapRastPortAlpha(). They are hardware accelerated where supported (most gfx cards). WritePixelArrayAlpha() is only AltiVec accelerated.
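    A minimal sketch of what this suggestion might look like in practice (the BltBitMapRastPortAlpha() prototype and the NULL tag list are reproduced from memory here, so check the cybergraphics.library autodoc; the helper name is invented):

        #include <graphics/gfx.h>
        #include <graphics/rastport.h>
        #include <proto/cybergraphics.h>

        /* Blit an ARGB source bitmap into a RastPort, taking the per-pixel
         * alpha from the source bitmap itself. Where the gfx driver supports
         * it, this path can run on the GPU. */
        static void blit_with_alpha(struct BitMap *argb_src, struct RastPort *dest,
                                    LONG x, LONG y, LONG w, LONG h)
        {
            /* NULL tag list: no extra options, alpha comes from the source. */
            BltBitMapRastPortAlpha(argb_src, 0, 0, dest, x, y, w, h, NULL);
        }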
    1 + 1 = 3 with very large values of 1
  • »13.03.12 - 09:56
  • Priest of the Order of the Butterfly
    Crumb
    Posts: 730 from 2003/2/24
    From: aGaS & CUAZ Al...
    @Itix

    That's odd... why don't these "Mask" functions use the accelerated "Alpha" ones internally?
  • »13.03.12 - 10:20
  • MorphOS Developer
    cyfm
    Posts: 537 from 2003/4/11
    From: Germany
    Crumb,
    Quote:

    That's odd... why don't these "Mask" functions use the accelerated "Alpha" ones internally?

    This call has not been updated for quite a while because its most popular use has been the legacy Workbench icon display, and there has not been much use for it beyond that scope. Ambient uses the alpha (or even non-alpha) blitting calls.
  • »13.03.12 - 11:13
  • Order of the Butterfly
    ChrisH
    Posts: 167 from 2009/11/26
    @Crumb
    I imagine that h/w accelerating the Mask functions would not be efficient: every time you called BltMaskBitMapRastPort(), it would have to convert the supplied 1-bit mask bitmap (which could have changed) into an 8-bit (or deeper) alpha-channel bitmap.

    Since masks are stored in non-video RAM, and alpha bitmaps are stored in video RAM, that would involve the CPU reading every pixel of the mask bitmap and writing to the alpha bitmap. If you were going to do that, it wouldn't be much slower for the CPU to just write the actual (unmasked) pixels to the target bitmap of BltMaskBitMapRastPort(), i.e. what it is already doing.
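    (A sketch of the conversion step described above, to make the cost concrete; the names and the 8-bit alpha layout are only for illustration. Every mask pixel is read and every alpha byte is written by the CPU:)

        #include <stdint.h>

        /* Expand a 1-bit mask plane (one bit per pixel, MSB first within each
         * byte) into an 8-bit alpha buffer: set bits become opaque (0xFF),
         * clear bits transparent (0x00). */
        static void mask_to_alpha8(const uint8_t *mask, int mask_bytes_per_row,
                                   uint8_t *alpha, int width, int height)
        {
            for (int y = 0; y < height; y++) {
                const uint8_t *row = mask + (size_t)y * mask_bytes_per_row;
                for (int x = 0; x < width; x++) {
                    int bit = (row[x >> 3] >> (7 - (x & 7))) & 1;
                    alpha[(size_t)y * width + x] = bit ? 0xFF : 0x00;
                }
            }
        }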
    Author of the PortablE programming language.
    It is pitch black. You are likely to be eaten by a grue...
  • »14.03.12 - 11:31
  • Order of the Butterfly
    ChrisH
    Posts: 167 from 2009/11/26
    @itix
    Thanks for the pointer. It looks like it is impossible for BltBitMapAlpha() to use a separate 8-bit alpha-channel-only bitmap? Although I can't even see a way to allocate such a bitmap using CyberGraphics anyway.

    [ Edited by ChrisH 14.03.2012 - 09:50 ]
    Author of the PortablE programming language.
    It is pitch black. You are likely to be eaten by a grue...
  • »14.03.12 - 11:45
  • MorphOS Developer
    itix
    Posts: 1516 from 2003/2/24
    From: Finland
    Right, the alpha channel must be part of the source bitmap. If you are operating with true-colour bitmaps that is not a problem, but it rules out using LUT or hi-colour bitmaps.

    CGX supports using the alpha from the destination bitmap, but I don't know if that operation is HW accelerated.
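    One way to get such a source bitmap is to allocate it as 32-bit ARGB, so the alpha channel lives inside the bitmap itself; the sketch below is from memory (BMF_SPECIALFMT with the CGX pixel format placed in the top 8 bits of the AllocBitMap() flags), so verify the flag handling against your cybergraphics.h:

        #include <proto/graphics.h>
        #include <cybergraphx/cybergraphics.h>

        /* Allocate a displayable 32-bit ARGB bitmap; the alpha channel is
         * part of the pixel data and can be filled with e.g.
         * WritePixelArray(..., RECTFMT_ARGB) before blitting with the
         * alpha-aware calls mentioned above. */
        static struct BitMap *alloc_argb_bitmap(ULONG w, ULONG h)
        {
            return AllocBitMap(w, h, 32,
                               BMF_MINDISPLAY | BMF_SPECIALFMT |
                               ((ULONG)PIXFMT_ARGB32 << 24),
                               NULL);
        }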
    1 + 1 = 3 with very large values of 1
  • »14.03.12 - 12:13