
opencl: broadcast complex64 fail (osX) #46

Closed
davidbp opened this issue Aug 8, 2017 · 25 comments

davidbp commented Aug 8, 2017

Hello,

I installed the package with Pkg.add, but some of the tests don't pass (Julia 0.6).

My output is:

julia> Pkg.test("GPUArrays")
INFO: Computing test dependencies for GPUArrays...
INFO: No packages to install, update or remove
INFO: Testing GPUArrays
Test Summary: | Pass  Total
julia         |   48     48
broadcast Complex64: Test Failed
  Expression: ERROR (unhandled task failure): MethodError: no method matching unsafe_string(::Ptr{Void})
Closest candidates are:
  unsafe_string(::Cstring) at c.jl:79
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}) at strings/string.jl:39
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}, ::Integer) at strings/string.jl:35
Stacktrace:
 [1] macro expansion at /Users/davidbuchacaprats/.julia/v0.6/OpenCL/src/context.jl:148 [inlined]
 [2] (::OpenCL.cl.##43#44)() at ./task.jl:335
all((x->begin 
            x == angle(10.0f0im)
        end), Array(B))
Stacktrace:
 [1] macro expansion at /Users/davidbuchacaprats/.julia/v0.6/GPUArrays/test/opencl.jl:36 [inlined]
 [2] macro expansion at ./test.jl:856 [inlined]
 [3] anonymous at ./<missing>:?
Test Summary:                        | Pass  Fail  Total
opencl                               |   44     1     45
  broadcast Float32                  |    5            5
  broadcast Complex64                |    4     1      5
  Custom kernel from Julia function  |    1            1
  Custom kernel from string function |    1            1
  transpose                          |    1            1
  mapreduce Float32 (4048,)          |    4            4
  mapreduce Int32 (4048,)            |    4            4
  mapreduce Float32 (1024, 1024)     |    4            4
  mapreduce Int32 (1024, 1024)       |    4            4
  mapreduce Float32 (77,)            |    4            4
  mapreduce Int32 (77,)              |    4            4
  mapreduce Float32 (1923, 209)      |    4            4
  mapreduce Int32 (1923, 209)        |    4            4
ERROR: LoadError: Some tests did not pass: 44 passed, 1 failed, 0 errored, 0 broken.
while loading /Users/davidbuchacaprats/.julia/v0.6/GPUArrays/test/runtests.jl, in expression starting on line 24
==================================[ ERROR: GPUArrays ]==================================

failed process: Process(`/Applications/Julia-0.6.app/Contents/Resources/julia/bin/julia -Ccore2 -J/Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib --compile=yes --depwarn=yes --check-bounds=yes --code-coverage=none --color=yes --compilecache=yes /Users/davidbuchacaprats/.julia/v0.6/GPUArrays/test/runtests.jl`, ProcessExited(1)) [1]

========================================================================================
INFO: No packages to install, update or remove
ERROR: GPUArrays had test errors

Any ideas on how to solve it?

@davidbp davidbp changed the title opencl and broadcast complex64 fail (osX) opencl: broadcast complex64 fail (osX) Aug 8, 2017
@SimonDanisch (Member)

Try it on GPUArrays master! There are tons of bug fixes ;)
Although this looks a bit like it won't be fixed.
What GPU do you have?
show(CLBackend.init()) should print that!
Make sure to do a Pkg.update() as well; I also fixed some bugs in dependencies.

@jpsamaroo (Member)

If you're running this on a recent-ish Mac, then it's very likely that none of the Float64 tests will pass. This is because some of the Intel Iris GPUs don't support Float64 natively, and apparently Apple never thought to include a software implementation in their OpenCL drivers.

I recommend taking Simon's suggestion and determining what GPU is being used, and then googling to see if it has Float64 support. You could also run clinfo and see what the driver reports.
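As a sketch of that check (on a real machine you would run `clinfo | grep -iE 'device name|fp64'`; `cl_khr_fp64` is the standard OpenCL extension name for native double-precision support; the device extension string below is a hypothetical excerpt for illustration):

```shell
# Hypothetical extension list as clinfo might report it for an Intel Iris GPU;
# note there is no cl_khr_fp64 entry, i.e. no native Float64.
extensions="cl_APPLE_SetMemObjectDestructor cl_khr_global_int32_base_atomics"
case "$extensions" in
  *cl_khr_fp64*) echo "Float64 supported" ;;
  *)             echo "no native Float64" ;;
esac
```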


SimonDanisch commented Aug 9, 2017

Complex64 is actually Float32 (64 total bits) ;) I currently don't have tests for Float64 in GPUArrays, since it's too troublesome on most of the hardware I'm trying to support... There are only a few, very expensive GPUs which don't completely suck with Float64 in compute-bound situations; even a $1600 Titan X is 34x slower at Float64 in that case... So for most problems on most GPUs, you'll find the CPU to be faster! Until someone really pushes for that use case, I will go easy on Float64 support.
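In Julia 0.6 terms (a minimal illustration, no GPU required):

```julia
# Complex64 is Complex{Float32}: two 32-bit floats, 64 bits in total,
# so the "broadcast Complex64" test only exercises Float32 arithmetic.
@assert Complex64 === Complex{Float32}
@assert sizeof(Complex64) == 8        # 2 × 4 bytes
@assert angle(10.0f0im) isa Float32   # the value the failing test compares against
```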


davidbp commented Aug 11, 2017

Hello again,

I just installed GPUArrays on another machine and did a Pkg.update().

I can't import GPUArrays:

julia> using GPUArrays
INFO: Precompiling module GPUArrays.
ERROR: LoadError: LoadError: LoadError: LoadError: UndefVarError: CLTranspiler not defined
Stacktrace:
 [1] include_from_node1(::String) at ./loading.jl:569
 [2] include(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [3] include_from_node1(::String) at ./loading.jl:569
 [4] include(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [5] include_from_node1(::String) at ./loading.jl:569
 [6] include(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [7] include_from_node1(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [8] include(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [9] anonymous at ./<missing>:2
while loading /Users/macpro/.julia/v0.6/GPUArrays/src/backends/opencl/opencl.jl, in expression starting on line 17
while loading /Users/macpro/.julia/v0.6/GPUArrays/src/backends/supported_backends.jl, in expression starting on line 9
while loading /Users/macpro/.julia/v0.6/GPUArrays/src/backends/backends.jl, in expression starting on line 52
while loading /Users/macpro/.julia/v0.6/GPUArrays/src/GPUArrays.jl, in expression starting on line 10
ERROR: Failed to precompile GPUArrays to /Users/macpro/.julia/lib/v0.6/GPUArrays.ji.
Stacktrace:
 [1] compilecache(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [2] _require(::Symbol) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [3] require(::Symbol) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?

My julia version is

julia> versioninfo()
Julia Version 0.6.0
Commit 903644385b (2017-06-19 13:05 UTC)

I tested ArrayFire on the same machine (with Julia 0.5, though) and the GPU backend seemed to work.

Has anyone tried to use it on a Mac?

@SimonDanisch (Member)

Please do a Pkg.checkout("GPUArrays") ;)


davidbp commented Aug 12, 2017

Thank you Simon, I thought Pkg.add installed the master branch by default.

I will play with it now. I can import the package, but it seems no BLAS backend is installed.

julia> aux_gpu = GPUArray(rand(Float32,100,100))

GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: 
100×100 Array{Float32,2}:

julia> aux_gpu * aux_gpu

ERROR: MethodError: no method matching blas_module(::GPUArrays.CLBackend.CLContext)
Closest candidates are:
  blas_module(::GPUArrays.JLBackend.JLContext) at /Users/macpro/.julia/v0.6/GPUArrays/src/backends/julia/julia.jl:108
  blas_module(::Union{GPUArrays.AbstractAccArray{T,1}, GPUArrays.AbstractAccArray{T,2}} where T) at /Users/macpro/.julia/v0.6/GPUArrays/src/backends/blas.jl:4
Stacktrace:
 [1] gemm! at /Users/macpro/.julia/v0.6/GPUArrays/src/backends/blas.jl:20 [inlined]
 [2] gemm_wrapper!(::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}, ::Char, ::Char, ::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}, ::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}) at ./linalg/matmul.jl:367
 [3] A_mul_B!(::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}, ::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}, ::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}) at ./linalg/matmul.jl:148
 [4] *(::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}, ::GPUArrays.GPUArray{Float32,2,OpenCL.cl.Buffer{Float32},GPUArrays.CLBackend.CLContext}) at ./linalg/matmul.jl:146

@SimonDanisch (Member)

CLBLAS is in the REQUIRE file and should be installed automatically:
https://github.com/JuliaGPU/GPUArrays.jl/blob/master/REQUIRE#L16
Can you make sure that it's installed correctly and that Pkg.test("CLBLAS") succeeds?
After that, run Pkg.build("GPUArrays") again!


davidbp commented Aug 12, 2017

Hi Simon, Pkg.test("CLBLAS") does not pass. Thanks a lot for your time :)

julia> Pkg.test("CLBLAS")
INFO: Computing test dependencies for CLBLAS...
INFO: No packages to install, update or remove
INFO: Testing CLBLAS
ERROR: LoadError: LoadError: CLBLAS not properly installed. Please run Pkg.build("CLBLAS") then restart Julia.
Stacktrace:
 [1] error(::String) at /Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 [2] include_from_node1(::String) at ./loading.jl:569
while loading /Users/macpro/.julia/v0.6/CLBLAS/src/CLBLAS.jl, in expression starting on line 12
while loading /Users/macpro/.julia/v0.6/CLBLAS/test/runtests.jl, in expression starting on line 6
=============================================================[ ERROR: CLBLAS ]=============================================================

failed process: Process(`/Applications/Julia-0.6.app/Contents/Resources/julia/bin/julia -Cnative -J/Applications/Julia-0.6.app/Contents/Resources/julia/lib/julia/sys.dylib --compile=yes --depwarn=yes --check-bounds=yes --code-coverage=none --color=yes --compilecache=yes /Users/macpro/.julia/v0.6/CLBLAS/test/runtests.jl`, ProcessExited(1)) [1]

===========================================================================================================================================
INFO: No packages to install, update or remove
ERROR: CLBLAS had test errors

If I do Pkg.build("CLBLAS") I get

INFO: Building CLBLAS
=============================================================[ ERROR: CLBLAS ]=============================================================

LoadError:     OSX not oficially supported.
    Find manual build instructions on: https://github.com/clMathLibraries/clBLAS/wiki/Build

while loading /Users/macpro/.julia/v0.6/CLBLAS/deps/build.jl, in expression starting on line 38

===========================================================================================================================================

=============================================================[ BUILD ERRORS ]==============================================================

WARNING: CLBLAS had build errors.

 - packages with build errors remain installed in /Users/macpro/.julia/v0.6
 - build the package(s) and all dependencies with `Pkg.build("CLBLAS")`
 - build a single package by running its `deps/build.jl` script

===========================================================================================================================================

@SimonDanisch (Member)

Ah, of course...
Can you try: JuliaGPU/CLBLAS.jl#34
I'm not on OSX and no one has ever verified to me that it works, so it's still unmerged...


SimonDanisch commented Aug 12, 2017

No wait! This isn't the right branch!
Edit:
it's the right branch, try it out! ;)
I got confused by JuliaGPU/CLBLAS.jl#35 being merged, but it was actually merged into JuliaGPU/CLBLAS.jl#34...


davidbp commented Aug 12, 2017

What exactly should I try?

@SimonDanisch (Member)

Checking out the branch from JuliaGPU/CLBLAS.jl#34.
In other words:

Pkg.checkout("CLBLAS", "sd/osx")
Pkg.build("CLBLAS")
Pkg.test("CLBLAS")
Pkg.build("GPUArrays")


davidbp commented Aug 12, 2017

Thank you for making it explicit.
Now I get an "error", but it turns out that at least I can do a matrix multiplication, and the result seems to match the one with standard arrays.
This is what I get:

aux_gpu * aux_gpu
GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: ERROR (unhandled task failure): MethodError: no method matching unsafe_string(::Ptr{Void})
Closest candidates are:
  unsafe_string(::Cstring) at c.jl:79
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}) at strings/string.jl:39
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}, ::Integer) at strings/string.jl:35
Stacktrace:
 [1] macro expansion at /Users/macpro/.julia/v0.6/OpenCL/src/context.jl:148 [inlined]
 [2] (::OpenCL.cl.##43#44)() at ./task.jl:335

Should we suggest that the CLBLAS repo add the lines you just mentioned

Pkg.checkout("CLBLAS", "sd/osx")
Pkg.build("CLBLAS")
Pkg.test("CLBLAS")
Pkg.build("GPUArrays")

for Mac users?
It's not straightforward to find the issue and extract what you should do from there.

@SimonDanisch (Member)

No, I should just merge that fix, once people confirm it works ;) As I said, it's just an issue of me not being able to test it myself...

@SimonDanisch (Member)

> Now I get an "error" but it turns out that at least I can do a matrix multiplication and the result seems to match the one with standard arrays.
> This is what I get:

I'm not sure I understand correctly. Matrix multiplication works, but what is aux_gpu * aux_gpu then?


davidbp commented Aug 12, 2017

aux_gpu * aux_gpu is a matrix multiplication; it prints the error I pasted above and afterwards prints the multiplication result.

Anyway, I tried the "usage" code from the README of the repo:

This is what I get

julia> using GPUArrays

julia> a = GPUArray(rand(Float32, 32, 32)) # can be constructed from any Julia Array

GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: 
32×32 Array{Float32,2}:
 0.8526     0.828882   0.410319   0.294332   0.496524   0.354959   …  0.271756   0.454393   0.101808   0.753966    0.119721
 0.668502   0.0138408  0.783639   0.555318   0.714958   0.10345    …  0.681111   0.725303   0.419619   0.147732    0.621417
 0.0133333  0.37887    0.845415   0.265768   0.18437    0.91373    …  0.0387064  0.313273   0.242382   0.885029    0.408966
 ⋮                                                      ⋮          ⋱             ⋮
 0.797886   0.255804   0.0554874  0.321783   0.388986   0.645784   …  0.453057   0.946708   0.417029   0.803092    0.347245

julia> b = similar(a) # similar and other Julia.Base operations are defined
GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: 
32×32 Array{Float32,2}:
 NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  …  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32
 NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  …  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32
 ⋮                                   ⋮                                 ⋱                       ⋮
 NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  …  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32  NaN32

julia> b .= a .+ 1f0 # broadcast in action, only works on 0.6 for .+. on 0.5 do: b .= (+).(a, 1f0)!


GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: ERROR (unhandled task failure): MethodError: no method matching unsafe_string(::Ptr{Void})
Closest candidates are:
  unsafe_string(::Cstring) at c.jl:79
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}) at strings/string.jl:39
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}, ::Integer) at strings/string.jl:35
Stacktrace:
 [1] macro expansion at /Users/macpro/.julia/v0.6/OpenCL/src/context.jl:148 [inlined]
 [2] (::OpenCL.cl.##43#44)() at ./task.jl:335

32×32 Array{Float32,2}:
 1.8526   1.82888  1.41032  1.29433  1.49652  1.35496  1.37915  …  1.4597   1.16213  1.27176  1.45439  1.10181  1.75397  1.11972
 1.6685   1.01384  1.78364  1.55532  1.71496  1.10345  1.47141  …  1.19249  1.41987  1.68111  1.7253   1.41962  1.14773  1.62142
 1.01333  1.37887  1.84542  1.26577  1.18437  1.91373  1.88233  …  1.66861  1.83128  1.03871  1.31327  1.24238  1.88503  1.40897
 ⋮                                            ⋮                 ⋱                    ⋮
 1.79789  1.2558   1.05549  1.32178  1.38899  1.64578  1.54085  …  1.42818  1.65191  1.45306  1.94671  1.41703  1.80309  1.34725

julia> c = a * b # calls to BLAS
GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: 
32×32 Array{Float32,2}:
 23.5745  23.1854  24.8872  22.9736  24.1913  22.6373  23.0999  …  23.9644  22.6713  23.6591  25.0684  23.3236  23.5552  23.8991
 24.1761  24.0336  24.7657  23.6872  23.4855  22.9869  23.637   …  24.5349  22.5578  23.7406  24.8514  23.1478  24.2213  23.3153
 23.5454  23.8278  24.6382  23.3872  24.3475  22.8394  25.203   …  25.6136  24.5461  23.6922  26.1853  24.5342  24.3295  25.0049
 ⋮                                            ⋮                 ⋱                    ⋮
 22.4591  23.4357  22.9762  23.6441  22.7305  20.5902  22.9604  …  23.7832  22.6977  22.9568  24.8487  22.638   22.1432  23.4478

julia> c = a * b # calls to BLAS
GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: 
32×32 Array{Float32,2}:
 23.5745  23.1854  24.8872  22.9736  24.1913  22.6373  23.0999    23.9644  22.6713  23.6591  25.0684  23.3236  23.5552  23.8991
 24.1761  24.0336  24.7657  23.6872  23.4855  22.9869  23.637      24.5349  22.5578  23.7406  24.8514  23.1478  24.2213  23.3153
 23.5454  23.8278  24.6382  23.3872  24.3475  22.8394  25.203      25.6136  24.5461  23.6922  26.1853  24.5342  24.3295  25.0049
 25.1946  24.1979  26.304   24.6966  25.6531  23.2637  25.4687     25.0926  24.2387  24.834   27.0442  24.6908  25.7599  25.0014
 21.2923  20.1689  21.6257  21.497   20.8858  20.2382  21.0931     21.7098  20.5572  21.6599  22.4567  19.8357  20.6031  21.0611
 23.7832  23.2633  24.6319  23.5105  24.3575  23.0299  24.1668    23.5717  22.3629  23.989   25.9483  23.19    23.6823  24.6138
 23.6937  23.7347  24.403   23.8702  23.6691  22.5832  24.1752     24.0262  22.3253  22.8331  24.6089  23.2243  22.8756  23.4062
 24.8318  23.7385  25.6085  25.142   25.3703  24.4798  24.7883     25.0663  24.084   23.798   26.6737  24.3092  25.9503  25.2205
 25.708   26.5103  27.2967  25.4633  27.0079  24.8937  27.6311     27.1637  25.2473  25.5047  28.354   26.2077  27.1026  26.3921
 21.4299  22.0982  23.515   21.8488  22.6457  21.0812  22.5087     22.8194  22.7465  21.4262  23.7832  21.9031  23.1641  22.6203
 23.4543  24.1322  24.9275  23.5317  24.2731  22.1792  23.4908    23.7734  23.1278  23.1375  24.4601  23.1979  23.6723  23.2748
 23.1492  23.1039  24.391   23.021   23.977   21.5591  24.0397     23.0558  22.4226  22.8752  24.7909  22.7309  23.5416  22.7744
 25.0481  25.1518  25.8434  24.5838  24.9094  23.9057  24.9699     25.4282  24.6153  24.6156  27.1618  24.8339  25.25    24.8336
 25.7879  25.7597  26.3797  24.6853  25.44    24.5383  26.2799     25.9879  24.6335  24.906   26.7398  25.4566  25.1696  26.0007
 23.9716  23.4098  24.7164  24.1038  24.9546  22.9029  25.795      24.5404  23.5406  23.059   26.3954  23.7617  24.926   24.4419
 21.2053  21.24    20.7215  20.7669  21.2293  19.6921  20.9084    21.0123  19.9982  20.9028  22.9037  20.2739  21.0392  20.3715
 25.6191  25.4816  25.8308  25.1975  25.7129  24.0618  26.9006     26.5119  25.0938  25.2328  27.6723  25.8882  25.4787  26.8281
 22.4133  21.9553  22.9229  21.8904  23.1548  21.5711  22.3028     22.7852  21.9841  22.25    24.4272  22.0689  23.1407  23.1006
 21.6671  19.6495  22.5516  21.3542  21.6178  20.9106  22.0537     21.0663  20.345   20.9166  22.4687  20.504   22.7433  21.4295
 26.5838  26.6299  28.4475  26.3963  26.7188  25.7514  27.8597     28.4093  26.4497  26.4014  29.1526  27.2553  28.1764  27.5824
 22.6376  22.9483  23.5326  24.8452  24.0026  22.0827  25.1118    24.0055  23.1977  23.1569  25.6926  23.6334  23.6134  24.4921
 24.0364  24.83    25.1572  23.5951  24.7065  22.9177  24.6648     24.7795  23.6049  24.2119  26.4699  24.4428  23.5125  24.4986
 23.1508  23.3658  22.8916  23.012   23.6599  22.0743  23.6951     22.923   21.8435  21.9922  23.9568  22.2555  22.7597  22.216 
 25.8447  26.0363  26.5178  26.5106  26.0018  24.0201  26.4531     26.3358  26.4526  25.9795  28.1944  25.5714  26.4178  27.2454
 26.1953  28.0487  28.292   26.8727  27.8201  25.2937  27.6155     27.465   27.0264  26.3956  29.1866  27.0313  27.3588  28.0008
 23.3423  23.9832  24.3826  23.469   23.922   22.4665  24.1212    24.5902  22.6775  23.214   26.2381  23.0343  23.529   24.3099
 21.4995  19.4693  20.712   20.3048  21.1341  20.3374  21.5649     20.6225  19.7261  20.5758  21.6572  20.0495  21.0412  20.6184
 21.3254  21.5302  22.6849  20.9457  22.2486  20.0939  20.8088     21.688   20.5335  21.4179  22.5373  19.7851  21.7529  20.9666
 18.1485  18.0176  19.3649  18.1204  18.749   17.4691  19.2744     18.4378  18.2736  18.0792  19.5635  17.5869  18.2732  18.8781
 23.3237  23.9434  24.2929  23.6818  23.8403  21.5358  24.5312     24.2635  23.6266  23.0714  25.0417  23.5064  23.0865  25.0745
 25.0352  25.3473  26.3143  25.577   24.9049  23.4697  25.657     26.2205  24.641   25.2403  26.7173  25.5146  24.8855  26.1828
 22.4591  23.4357  22.9762  23.6441  22.7305  20.5902  22.9604     23.7832  22.6977  22.9568  24.8487  22.638   22.1432  23.4478

@SimonDanisch
Member

GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: ERROR (unhandled task failure): MethodError: no method matching unsafe_string(::Ptr{Void})
Closest candidates are:
  unsafe_string(::Cstring) at c.jl:79
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}) at strings/string.jl:39
  unsafe_string(::Union{Ptr{Int8}, Ptr{UInt8}}, ::Integer) at strings/string.jl:35
Stacktrace:
 [1] macro expansion at /Users/macpro/.julia/v0.6/OpenCL/src/context.jl:148 [inlined]
 [2] (::OpenCL.cl.##43#44)() at ./task.jl:335

@vchuravy do we still not have this fixed? :D I remember taking a stab at this, but it wasn't super intuitive what to do, so I moved it back in priority. Fixing this should be simple, no?
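
For context, a minimal illustration of the mismatch (not OpenCL.jl's actual code): the C error callback hands the message back as a raw void pointer, and `unsafe_string` only accepts `Cstring`, `Ptr{UInt8}`, or `Ptr{Int8}`, so the pointer needs converting first. On Julia 0.6 the raw type is `Ptr{Void}` (renamed `Ptr{Cvoid}` in 1.0); the snippet below is a hypothetical sketch using the 1.x spelling:

```julia
s = "some driver message"                  # stand-in for a C string from the driver
GC.@preserve s begin
    raw = convert(Ptr{Cvoid}, pointer(s))  # what the callback actually receives
    # unsafe_string(raw) would throw the MethodError quoted above;
    # converting to Ptr{UInt8} first makes dispatch succeed:
    msg = unsafe_string(convert(Ptr{UInt8}, raw))
end
```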

@davidbp
Author

davidbp commented Aug 12, 2017

What about the NaN32 results from `similar`? I guess that is not what I should get...

@SimonDanisch
Member

No, that's fine; see the Julia Base documentation for `similar`:

help?> similar
search: similar

  similar(array, [element_type=eltype(array)], [dims=size(array)])

  Create an uninitialized mutable array with the given element type and size,
  based upon the given source array. [...]

The OpenCL driver just does different things with uninitialized memory, so it might look odd if you're coming from Julia ;)
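
To see the same behaviour in plain Julia (a minimal sketch, no GPU involved):

```julia
a = rand(Float32, 3, 3)
b = similar(a)   # same eltype and size as `a`, but contents are uninitialized:
                 # whatever bit patterns happen to be in memory, possibly NaN32.
fill!(b, 0f0)    # initialize explicitly before reading from it
```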

@SimonDanisch
Member

About the error, can you try my new OpenCL branch?

Pkg.checkout("OpenCL", "sd/context_error")

@davidbp
Author

davidbp commented Aug 12, 2017

Just tried to do the checkout:

julia> Pkg.checkout("OpenCL", "sd/context_error")

INFO: Checking out OpenCL sd/context_error...
ERROR: GitError(Code:ERROR, Class:Merge, There is no tracking information for the current branch.)
Stacktrace:
 [1] (::Base.LibGit2.##117#125{Base.LibGit2.GitRepo})(::Base.LibGit2.GitReference) at ./libgit2/libgit2.jl:709
 [2] with(::Base.LibGit2.##117#125{Base.LibGit2.GitRepo}, ::Base.LibGit2.GitReference) at ./libgit2/types.jl:608
 [3] #merge!#109(::String, ::String, ::Bool, ::Base.LibGit2.MergeOptions, ::Base.LibGit2.CheckoutOptions, ::Function, ::Base.LibGit2.GitRepo) at ./libgit2/libgit2.jl:706
 [4] (::Base.#kw##merge!)(::Array{Any,1}, ::Base.#merge!, ::Base.LibGit2.GitRepo) at ./<missing>:0
 [5] (::Base.Pkg.Entry.##16#18{String,String,Bool,Bool})(::Base.LibGit2.GitRepo) at ./pkg/entry.jl:230
 [6] transact(::Base.Pkg.Entry.##16#18{String,String,Bool,Bool}, ::Base.LibGit2.GitRepo) at ./libgit2/libgit2.jl:882
 [7] with(::Base.Pkg.Entry.##15#17{String,String,Bool,Bool}, ::Base.LibGit2.GitRepo) at ./libgit2/types.jl:608
 [8] checkout(::String, ::String, ::Bool, ::Bool) at ./pkg/entry.jl:226
 [9] (::Base.Pkg.Dir.##4#7{Array{Any,1},Base.Pkg.Entry.#checkout,Tuple{String,String,Bool,Bool}})() at ./pkg/dir.jl:36
 [10] cd(::Base.Pkg.Dir.##4#7{Array{Any,1},Base.Pkg.Entry.#checkout,Tuple{String,String,Bool,Bool}}, ::String) at ./file.jl:70
 [11] #cd#1(::Array{Any,1}, ::Function, ::Function, ::String, ::Vararg{Any,N} where N) at ./pkg/dir.jl:36
 [12] #checkout#1(::Bool, ::Bool, ::Function, ::String, ::String) at ./pkg/pkg.jl:188
 [13] checkout(::String, ::String) at ./pkg/pkg.jl:188

@SimonDanisch
Member

Uhm, if you have git, maybe just check the branch out manually? No idea why Pkg doesn't like it...

@davidbp
Author

davidbp commented Aug 12, 2017

I did it like this, which seemed to work:

julia> Pkg.checkout("OpenCL")
INFO: Checking out OpenCL master...
INFO: Pulling OpenCL latest master...
INFO: No packages to install, update or remove

julia> Pkg.checkout("OpenCL", "sd/context_error")
INFO: Checking out OpenCL sd/context_error...
INFO: Pulling OpenCL latest sd/context_error...
INFO: No packages to install, update or remove

Nevertheless, the same error appears:

julia> b .= a .+ 1f0 # broadcast in action; `.+` fusion works on 0.6. On 0.5 do: b .= (+).(a, 1f0)

GPUArray with ctx: CLContext: AMD Radeon HD - FirePro D300 Compute Engine: ERROR (unhandled task failure): OpenCL Error: OpenCL.Context error: 
Stacktrace:
 [1] raise_context_error(::String, ::String) at /Users/macpro/.julia/v0.6/OpenCL/src/context.jl:109
 [2] macro expansion at /Users/macpro/.julia/v0.6/OpenCL/src/context.jl:148 [inlined]
 [3] (::OpenCL.cl.##43#44)() at ./task.jl:335


32×32 Array{Float32,2}:
 1.70448  1.91852  1.32048  1.93167  1.52033    1.93081  1.09072  1.58307  1.49914  1.08932
 1.39251  1.26244  1.43798  1.11948  1.11554     1.73711  1.40498  1.7972   1.67021  1.10297
 1.99474  1.75247  1.21466  1.68113  1.92287     1.06316  1.6918   1.44683  1.62125  1.28932
 1.21543  1.3894   1.58425  1.11354  1.82618     1.88527  1.39238  1.97935  1.35522  1.79892
 1.14834  1.93428  1.28461  1.732    1.18455     1.35504  1.34004  1.77376  1.69728  1.47436
 1.06521  1.15936  1.93665  1.62544  1.08219    1.23311  1.68085  1.11341  1.05959  1.13956
 1.53228  1.38468  1.96888  1.18734  1.63338     1.10464  1.34928  1.1597   1.24712  1.69932
 1.18697  1.63697  1.44004  1.42134  1.92032     1.03755  1.46165  1.49082  1.60366  1.50918
 1.01852  1.09     1.59666  1.55698  1.66693     1.83023  1.40215  1.76891  1.24338  1.33064
 1.56579  1.28548  1.21417  1.13758  1.59256     1.58107  1.52362  1.5      1.70552  1.38332
 1.23115  1.64668  1.00132  1.43287  1.91936    1.21057  1.13018  1.7654   1.56661  1.70769
 1.41063  1.80233  1.31628  1.36327  1.48347     1.38104  1.07779  1.27807  1.66923  1.26219
 1.19647  1.34168  1.59091  1.64066  1.51769     1.77828  1.91895  1.92575  1.77569  1.38969
 1.79349  1.95093  1.8877   1.62068  1.46582     1.93641  1.93393  1.85289  1.77832  1.29929
 1.09185  1.0617   1.26775  1.33075  1.46967     1.21906  1.63234  1.72558  1.49838  1.12698
 1.9688   1.07016  1.54829  1.4585   1.61541    1.27418  1.16897  1.92076  1.96274  1.70945
 1.26923  1.70933  1.00907  1.39115  1.14036     1.85303  1.89914  1.10554  1.90671  1.47342
 1.00163  1.95887  1.12751  1.21146  1.32964     1.86329  1.30038  1.53991  1.54425  1.85412
 1.03363  1.62434  1.09133  1.39592  1.90573     1.89345  1.28812  1.56598  1.40704  1.21083
 1.14379  1.64692  1.27536  1.05565  1.15106     1.61291  1.91081  1.43512  1.65658  1.50305
 1.60731  1.33534  1.63576  1.23622  1.42272    1.99377  1.4538   1.49688  1.02568  1.7597 
 1.41938  1.15261  1.17086  1.83804  1.81429     1.67256  1.96677  1.23113  1.58721  1.12076
 1.31232  1.07035  1.78565  1.37776  1.91147     1.12882  1.32026  1.41219  1.28093  1.09403
 1.66476  1.54674  1.27874  1.12277  1.79261     1.26881  1.33534  1.76601  1.5362   1.42416
 1.50834  1.51135  1.83748  1.49609  1.56328     1.99764  1.56276  1.42145  1.65368  1.30209
 1.30307  1.27332  1.60497  1.26231  1.54869    1.73748  1.83578  1.92973  1.85151  1.77106
 1.98003  1.55736  1.74913  1.40137  1.62451     1.99169  1.63854  1.48879  1.60847  1.70785
 1.97146  1.40501  1.89441  1.408    1.63046     1.06376  1.11392  1.60981  1.62065  1.55862
 1.56116  1.10533  1.27758  1.11141  1.20649     1.83951  1.41215  1.55512  1.49021  1.56456
 1.47281  1.29182  1.4433   1.2862   1.82585     1.61482  1.44809  1.20623  1.16951  1.45137
 1.79081  1.86249  1.732    1.66182  1.70263    1.74479  1.59936  1.59748  1.24223  1.68003
 1.33762  1.08826  1.41437  1.40411  1.21511     1.70277  1.79025  1.27623  1.91856  1.8734 

EDIT: It's not the same error. In any case, the result is printed.

@SimonDanisch
Member

Okay, this is now finally the correct error path, but it is probably not a real error, since the error string is empty! Let me wait for @vchuravy's opinion; this context error handling code is pretty confusing, and I wouldn't be surprised if it's actually hitting this code path on initialization.
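
A hypothetical sketch of the guard being discussed (the function name comes from the stacktrace above; this is not OpenCL.jl's actual implementation): only raise when the context callback delivered a non-empty error string.

```julia
# Sketch: ignore spurious callbacks that carry no message,
# instead of raising an empty "OpenCL.Context error:".
function raise_context_error(errinfo::String, private_info::String)
    isempty(errinfo) && return   # nothing to report
    error("OpenCL.Context error: ", errinfo)
end

raise_context_error("", "")      # no-op rather than an empty error
```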

@maleadt
Member

maleadt commented Jan 28, 2020

OpenCL support is not part of this repository (and has not been maintained on any recent Julia version 😞).

@maleadt maleadt closed this as completed Jan 28, 2020