diff --git a/README.md b/README.md
index 294403a..08baaa8 100644
--- a/README.md
+++ b/README.md
@@ -14,21 +14,19 @@ There are top-level methods `curve_fit()` and `margin_error()` that are useful f
using LsqFit

# a two-parameter exponential model
-# x: array of independent variables
+# x: independent variable
# p: array of model parameters
-# model(x, p) will accept the full data set as the first argument `x`.
-# This means that we need to write our model function so it applies
-# the model to the full dataset. We use `@.` to apply the calculations
-# across all rows.
-@. model(x, p) = p[1]*exp(-x*p[2])
+# model(x, p) will accept the independent variable as the first argument `x`.
+model(x, p) = p[1]*exp(-x*p[2])
```
-The function applies the per observation function `p[1]*exp(-x[i]*p[2])` to the full dataset in `x`, with `i` denoting an observation row. We simulate some data and chose our "true" parameters.
+We simulate some data and choose our "true" parameters.
+
```julia
# some example data
# xdata: independent variables
# ydata: dependent variable
xdata = range(0, stop=10, length=20)
-ydata = model(xdata, [1.0 2.0]) + 0.01*randn(length(xdata))
+ydata = model.(xdata, Ref([1.0 2.0])) + 0.01*randn(length(xdata))
p0 = [0.5, 0.5]
```
Now, we're ready to fit the model.
@@ -56,10 +54,11 @@ confidence_inter = confidence_interval(fit, 0.05)
# The finite difference method is used above to approximate the Jacobian.
# Alternatively, a function which calculates it exactly can be supplied instead.
+# It should return a vector `J_i = df/d(p_i)`
function jacobian_model(x,p)
-    J = Array{Float64}(undef, length(x), length(p))
-    @. J[:,1] = exp(-x*p[2])     #dmodel/dp[1]
-    @. @views J[:,2] = -x*p[1]*J[:,1] #dmodel/dp[2], thanks to @views we don't allocate memory for the J[:,1] slice
+    J = Array{Float64}(undef, length(p))
+    J[1] = exp(-x*p[2]) #dmodel/dp[1]
+    J[2] = -x*p[1]*J[1] #dmodel/dp[2]
    J
end
fit = curve_fit(model, jacobian_model, xdata, ydata, p0)
@@ -67,10 +66,19 @@ fit = curve_fit(model, jacobian_model, xdata, ydata, p0)
Multivariate regression
-----------------------
-There's nothing inherently different if there are more than one variable entering the problem. We just need to specify the columns appropriately in our model specification:
+There's nothing inherently different if there is more than one variable
+entering the problem. The argument `x` must just be a vector:
+```julia
+multimodel(x, p) = p[1]*exp(-x[1]*p[2]+x[2]*p[3])
+```
+and the `x` argument to the multivariate `curve_fit` function should be a
+vector of such input vectors.
```julia
-@. multimodel(x, p) = p[1]*exp(-x[:, 1]*p[2]+x[:, 2]*p[3])
+x = [randn(2) for _ in 1:50]
+y = multimodel.(x, Ref([1.0, 2.0, 1.0])) + randn(50)
+fit = curve_fit(multimodel, x, y, [1.0, 1.0, 1.0])
```
+
Evaluating the Jacobian and using automatic differentiation
-------------------------
The default is to calculate the Jacobian using a central finite differences scheme if no Jacobian function is provided. The default is to use central differences because it can be more accurate than forward finite differences, but at the expense of computational cost. It is possible to switch to forward finite differences, like MINPACK uses for example, by specifying `autodiff=:finiteforward`:
@@ -85,15 +93,18 @@ Here, you have to be careful not to manually restrict any types in your code to,
In-place model and Jacobian
-------------------------
-It is possible to either use an in-place model, or an in-place model *and* an in-place Jacobian. 
It might be pertinent to use this feature when `curve_fit` is slow, or consumes a lot of memory. +It is possible to either use an in-place model, or an in-place model *and* an +in-place Jacobian. It might be pertinent to use this feature when `curve_fit` +is slow, or consumes a lot of memory. An in-place model must operate on the +entire input vector, not individual elements. ``` -model_inplace(F, x, p) = (@. F = p[1] * exp(-x * p[2])) +model_inplace!(F, x, p) = (@. F = p[1] * exp(-x * p[2])) -function jacobian_inplace(J::Array{Float64,2},x,p) +function jacobian_inplace!(J::Array{Float64,2},x,p) @. J[:,1] = exp(-x*p[2]) @. @views J[:,2] = -x*p[1]*J[:,1] end -fit = curve_fit(model_inplace, jacobian_inplace, xdata, ydata, p0; inplace = true) +fit = curve_fit(model_inplace!, jacobian_inplace!, xdata, ydata, p0; inplace = true) ``` Geodesic acceleration @@ -161,7 +172,7 @@ This performs a fit using a non-linear iteration to minimize the (weighted) resi `sigma = stderror(fit; atol, rtol)`: -* `fit`: result of curve_fit (a `LsqFitResult` type) +* `fit`: result of `curve_fit` (a `LsqFitResult` type) * `atol`: absolute tolerance for negativity check * `rtol`: relative tolerance for negativity check @@ -171,7 +182,7 @@ If no weights are provided for the fits, the variance is estimated from the mean `margin_of_error = margin_error(fit, alpha=0.05; atol, rtol)`: -* `fit`: result of curve_fit (a `LsqFitResult` type) +* `fit`: result of `curve_fit` (a `LsqFitResult` type) * `alpha`: significance level * `atol`: absolute tolerance for negativity check * `rtol`: relative tolerance for negativity check @@ -180,7 +191,7 @@ This returns the product of standard error and critical value of each parameter `confidence_interval = confidence_interval(fit, alpha=0.05; atol, rtol)`: -* `fit`: result of curve_fit (a `LsqFitResult` type) +* `fit`: result of `curve_fit` (a `LsqFitResult` type) * `alpha`: significance level * `atol`: absolute tolerance for negativity check * `rtol`: relative tolerance for negativity check diff --git a/docs/src/getting_started.md b/docs/src/getting_started.md index 834be30..4c6a125 100644 --- a/docs/src/getting_started.md +++ b/docs/src/getting_started.md @@ -16,9 +16,9 @@ m(t, \boldsymbol{p}) = p_1 \exp(-p_2 t) ``` ```julia -julia> # t: array of independent variable +julia> # t: independent variable julia> # p: array of model parameters -julia> model(t, p) = p[1] * exp.(-p[2] * t) +julia> model(t, p) = p[1] * exp(-p[2] * t) ``` For illustration purpose, we generate some fake data. @@ -27,7 +27,7 @@ For illustration purpose, we generate some fake data. julia> # tdata: data of independent variable julia> # ydata: data of dependent variable julia> tdata = range(0, stop=10, length=20) -julia> ydata = model(tdata, [1.0 2.0]) + 0.01*randn(length(tdata)) +julia> ydata = model.(tdata, Ref([1.0 2.0])) + 0.01*randn(length(tdata)) ``` Before fitting the data, we also need a initial value of parameters for `curve_fit()`. @@ -46,7 +46,8 @@ julia> param = fit.param 2.0735 ``` -`LsqFit.jl` also provides functions to examinep0 = [0.5, 0.5] the goodness of fit. `estimate_covar(fit)` computes the estimated covariance matrix. +`LsqFit.jl` also provides functions to examine the goodness of fit. +`estimate_covar(fit)` computes the estimated covariance matrix. 
```Julia
julia> cov = estimate_covar(fit)
diff --git a/docs/src/tutorial.md b/docs/src/tutorial.md
index f5df433..b107fbf 100644
--- a/docs/src/tutorial.md
+++ b/docs/src/tutorial.md
@@ -31,9 +31,9 @@ Y_i = \gamma_1 \exp(\gamma_2 t_i) + \epsilon_i
To fit data using `LsqFit.jl`, pass the defined model function (`m`), data (`tdata` and `ydata`) and the initial parameter value (`p0`) to `curve_fit()`. For now, `LsqFit.jl` only supports the Levenberg Marquardt algorithm.
```julia
-julia> # t: array of independent variables
+julia> # t: independent variable
julia> # p: array of model parameters
-julia> m(t, p) = p[1] * exp.(p[2] * t)
+julia> m(t, p) = p[1] * exp(p[2] * t)
julia> p0 = [0.5, 0.5]
julia> fit = curve_fit(m, tdata, ydata, p0)
```
@@ -80,9 +80,9 @@ By default, the finite differences is used (see [NLSolversBase.jl](https://githu
```Julia
function j_m(t,p)
-    J = Array{Float64}(length(t),length(p))
-    J[:,1] = exp.(p[2] .* t)       #df/dp[1]
-    J[:,2] = t .* p[1] .* J[:,1]   #df/dp[2]
+    J = Array{Float64}(undef, length(p))
+    J[1] = exp(p[2] * t)       #df/dp[1]
+    J[2] = t * p[1] * J[1]     #df/dp[2]
    J
end
@@ -267,7 +267,7 @@ The algorithm in `LsqFit.jl` will then provide a least squares solution $\boldsy
In `LsqFit.jl`, the residual function passed to `levenberg_marquardt()` is in different format, if the weight is a vector:
```julia
-    r(p) = sqrt.(wt) .* ( model(xpts, p) - ydata )
+    r(p) = sqrt.(wt) .* ( model.(xpts, Ref(p)) - ydata )
    lmfit(r, g, p0, wt; kwargs...)
```
diff --git a/src/curve_fit.jl b/src/curve_fit.jl
index d4dacac..2e0151d 100755
--- a/src/curve_fit.jl
+++ b/src/curve_fit.jl
@@ -16,14 +16,17 @@ StatsBase.residuals(lfr::LsqFitResult) = lfr.resid
mse(lfr::LsqFitResult) = rss(lfr)/dof(lfr)

function check_data_health(xdata, ydata)
-    if any(ismissing, xdata) || any(ismissing, ydata)
-        error("Data contains `missing` values and a fit cannot be performed")
+    if is_unhealthy(xdata)
+        error("x data contains `missing`, `Inf` or `NaN` values and a fit cannot be performed")
    end
-    if any(isinf, xdata) || any(isinf, ydata) || any(isnan, xdata) || any(isnan, ydata)
-        error("Data contains `Inf` or `NaN` values and a fit cannot be performed")
+    if is_unhealthy(ydata)
+        error("y data contains `missing`, `Inf` or `NaN` values and a fit cannot be performed")
    end
end

+is_unhealthy(x) = ismissing(x) || isinf(x) || isnan(x)
+is_unhealthy(x::AbstractArray) = any(is_unhealthy, x)
+
# provide a method for those who have their own Jacobian function
function lmfit(f, g, p0::AbstractArray, wt::AbstractArray; kwargs...)
    r = f(p0)
@@ -47,7 +50,7 @@ function lmfit(f, p0::AbstractArray, wt::AbstractArray; autodiff = :finite, kwar
    # this is a convenience function for the curve_fit() methods
    # which assume f(p) is the cost functionj i.e. the residual of a
    # model where
-    #   model(xpts, params...) = ydata + error (noise)
+    #   model(x, params...) = y + error (noise)
    # this minimizes f(p) using a least squares sum of squared error:
    #   rss = sum(f(p)^2)
@@ -97,7 +100,7 @@ model(x, p) = p[1]*exp.(-x.*p[2])
# xdata: independent variables
# ydata: dependent variable
xdata = range(0, stop=10, length=20)
-ydata = model(xdata, [1.0 2.0]) + 0.01*randn(length(xdata))
+ydata = model.(xdata, Ref([1.0 2.0])) .+ 0.01*randn(length(xdata))
p0 = [0.5, 0.5]
fit = curve_fit(model, xdata, ydata, p0)
@@ -114,7 +117,7 @@ function curve_fit(model, xdata::AbstractArray, ydata::AbstractArray, p0::Abstra
        f! = (F,p) -> (model(F,xdata,p); @. F = F - ydata)
        lmfit(f!, p0, T[], ydata; kwargs...)
else - f = (p) -> model(xdata, p) - ydata + f = (p) -> model.(xdata, Ref(p)) - ydata lmfit(f, p0, T[]; kwargs...) end end @@ -130,8 +133,8 @@ function curve_fit(model, jacobian_model, g! = (G,p) -> jacobian_model(G, xdata, p) lmfit(f!, g!, p0, T[], copy(ydata); kwargs...) else - f = (p) -> model(xdata, p) - ydata - g = (p) -> jacobian_model(xdata, p) + f = (p) -> model.(xdata, Ref(p)) - ydata + g = (p) -> vcat(reshape.(jacobian_model.(xdata, Ref(p)), 1, :)...) lmfit(f, g, p0, T[]; kwargs...) end end @@ -146,7 +149,7 @@ function curve_fit(model, xdata::AbstractArray, ydata::AbstractArray, wt::Abstra f! = (F,p) -> (model(F,xdata,p); @. F = u*(F - ydata)) lmfit(f!, p0, wt, ydata; kwargs...) else - f = (p) -> u .* ( model(xdata, p) - ydata ) + f = (p) -> u .* ( model.(xdata, Ref(p)) - ydata ) lmfit(f,p0,wt; kwargs...) end end @@ -161,8 +164,8 @@ function curve_fit(model, jacobian_model, g! = (G,p) -> (jacobian_model(G, xdata, p); @. G = u*G ) lmfit(f!, g!, p0, wt, ydata; kwargs...) else - f = (p) -> u .* ( model(xdata, p) - ydata ) - g = (p) -> u .* ( jacobian_model(xdata, p) ) + f = (p) -> u .* ( model.(xdata, Ref(p)) - ydata ) + g = (p) -> u .* ( vcat(reshape.(jacobian_model.(xdata, Ref(p)), 1, :)...) ) lmfit(f, g, p0, wt; kwargs...) end end @@ -179,7 +182,7 @@ function curve_fit(model, xdata::AbstractArray, ydata::AbstractArray, wt::Abstra # This requires the matrix to be positive definite u = cholesky(wt).U - f(p) = u * ( model(xdata, p) - ydata ) + f(p) = u * ( model.(xdata, Ref(p)) - ydata ) lmfit(f,p0,wt; kwargs...) end @@ -189,8 +192,8 @@ function curve_fit(model, jacobian_model, u = cholesky(wt).U - f(p) = u * ( model(xdata, p) - ydata ) - g(p) = u * ( jacobian_model(xdata, p) ) + f(p) = u * ( model.(xdata, Ref(p)) - ydata ) + g(p) = u * ( vcat(reshape.(jacobian_model.(xdata, Ref(p)), 1, :)...) ) lmfit(f, g, p0, wt; kwargs...) 
end diff --git a/test/curve_fit.jl b/test/curve_fit.jl index bb5516c..b46689f 100755 --- a/test/curve_fit.jl +++ b/test/curve_fit.jl @@ -1,63 +1,62 @@ -let +@testset "curve_fit" begin # before testing the model, check whether missing/null data is rejected tdata = [rand(1:10, 5)..., missing] - @test_throws ErrorException("Data contains `missing` values and a fit cannot be performed") LsqFit.check_data_health(tdata, tdata) + @test_throws ErrorException("x data contains `missing`, `Inf` or `NaN` values and a fit cannot be performed") LsqFit.check_data_health(tdata, tdata) tdata = [rand(1:10, 5)..., Inf] - @test_throws ErrorException("Data contains `Inf` or `NaN` values and a fit cannot be performed") LsqFit.check_data_health(tdata, tdata) + @test_throws ErrorException("x data contains `missing`, `Inf` or `NaN` values and a fit cannot be performed") LsqFit.check_data_health(tdata, tdata) tdata = [rand(1:10, 5)..., NaN] - @test_throws ErrorException("Data contains `Inf` or `NaN` values and a fit cannot be performed") LsqFit.check_data_health(tdata, tdata) - + @test_throws ErrorException("x data contains `missing`, `Inf` or `NaN` values and a fit cannot be performed") LsqFit.check_data_health(tdata, tdata) + # fitting noisy data to an exponential model - # TODO: Change to `.-x` when 0.5 support is dropped - model(x, p) = p[1] .* exp.(-x .* p[2]) + model(x, p) = p[1] * exp(-x * p[2]) # some example data Random.seed!(12345) xdata = range(0, stop=10, length=20) - ydata = model(xdata, [1.0, 2.0]) + 0.01*randn(length(xdata)) + ydata = model.(xdata, Ref([1.0, 2.0])) + 0.01*randn(length(xdata)) p0 = [0.5, 0.5] for ad in (:finite, :forward, :forwarddiff) fit = curve_fit(model, xdata, ydata, p0; autodiff = ad) - @assert norm(fit.param - [1.0, 2.0]) < 0.05 + @test norm(fit.param - [1.0, 2.0]) < 0.05 @test fit.converged # can also get error estimates on the fit parameters errors = margin_error(fit, 0.1) - @assert norm(errors - [0.017, 0.075]) < 0.01 + @test norm(errors - [0.017, 0.075]) < 0.015 end # if your model is differentiable, it can be faster and/or more accurate # to supply your own jacobian instead of using the finite difference function jacobian_model(x,p) - J = Array{Float64}(undef, length(x), length(p)) - J[:,1] = exp.(-x.*p[2]) #dmodel/dp[1] - J[:,2] = -x.*p[1].*J[:,1] #dmodel/dp[2] + J = Array{Float64}(undef, length(p)) + J[1] = exp(-x*p[2]) #dmodel/dp[1] + J[2] = -x*p[1]*J[1] #dmodel/dp[2] J end jacobian_fit = curve_fit(model, jacobian_model, xdata, ydata, p0) - @assert norm(jacobian_fit.param - [1.0, 2.0]) < 0.05 + @test norm(jacobian_fit.param - [1.0, 2.0]) < 0.05 @test jacobian_fit.converged # some example data yvars = 1e-6*rand(length(xdata)) - ydata = model(xdata, [1.0, 2.0]) + sqrt.(yvars) .* randn(length(xdata)) + ydata = model.(xdata, Ref([1.0, 2.0])) + sqrt.(yvars) .* randn(length(xdata)) fit = curve_fit(model, xdata, ydata, 1 ./ yvars, [0.5, 0.5]) println("norm(fit.param - [1.0, 2.0]) < 0.05 ? ", norm(fit.param - [1.0, 2.0])) - @assert norm(fit.param - [1.0, 2.0]) < 0.05 + @test norm(fit.param - [1.0, 2.0]) < 0.05 @test fit.converged # can also get error estimates on the fit parameters errors = margin_error(fit, 0.1) println("norm(errors - [0.017, 0.075]) < 0.1 ?", norm(errors - [0.017, 0.075])) - @assert norm(errors - [0.017, 0.075]) < 0.1 + @test norm(errors - [0.017, 0.075]) < 0.1 # test with user-supplied jacobian and weights fit = curve_fit(model, jacobian_model, xdata, ydata, 1 ./ yvars, p0) println("norm(fit.param - [1.0, 2.0]) < 0.05 ? 
", norm(fit.param - [1.0, 2.0])) - @assert norm(fit.param - [1.0, 2.0]) < 0.05 + @test norm(fit.param - [1.0, 2.0]) < 0.05 @test fit.converged - + # Parameters can also be inferred using arbitrary precision fit = curve_fit(model, xdata, ydata, 1 ./ yvars, BigFloat.(p0); x_tol=1e-20, g_tol=1e-20) @test fit.converged diff --git a/test/curve_fit_inplace.jl b/test/curve_fit_inplace.jl index e74986c..5c9ec82 100644 --- a/test/curve_fit_inplace.jl +++ b/test/curve_fit_inplace.jl @@ -1,21 +1,21 @@ -let +@testset "curve_fit_inplace" begin # fitting noisy data to an exponential model # TODO: Change to `.-x` when 0.5 support is dropped - @. model(x, p) = p[1] * exp(-x * p[2]) + model(x, p) = p[1] * exp(-x * p[2]) model_inplace(F, x, p) = (@. F = p[1] * exp(-x * p[2])) # some example data Random.seed!(12345) xdata = range(0, stop=10, length=500000) - ydata = model(xdata, [1.0, 2.0]) + 0.01*randn(length(xdata)) + ydata = model.(xdata, Ref([1.0, 2.0])) + 0.01*randn(length(xdata)) p0 = [0.5, 0.5] # if your model is differentiable, it can be faster and/or more accurate # to supply your own jacobian instead of using the finite difference function jacobian_model(x,p) - J = Array{Float64}(undef, length(x), length(p)) - @. J[:,1] = exp(-x*p[2]) #dmodel/dp[1] - @. @views J[:,2] = -x*p[1]*J[:,1] + J = Array{Float64}(undef, length(p)) + J[1] = exp(-x*p[2]) #dmodel/dp[1] + J[2] = -x*p[1]*J[1] J end @@ -25,8 +25,8 @@ let end - f(p) = model(xdata, p) - ydata - g(p) = jacobian_model(xdata, p) + f(p) = model.(xdata, Ref(p)) - ydata + g(p) = vcat(reshape.(jacobian_model.(xdata, Ref(p)), 1, :)...) df = OnceDifferentiable(f, g, p0, similar(ydata); inplace = false); evalf(x) = NLSolversBase.value!!(df, x) evalg(x) = NLSolversBase.jacobian!!(df, x) @@ -42,8 +42,8 @@ let j_inplace = evalg_inplace(p0); - @test r == r_inplace - @test j == j_inplace + @test r ≈ r_inplace + @test j ≈ j_inplace println("--------------\nPerformance of non-inplace") println("\t Evaluation function") @@ -91,7 +91,7 @@ let fit_inplace = @time curve_fit(model_inplace, xdata, ydata, p0; inplace = true, maxIter=100) @test fit_inplace.converged - @test fit_inplace.param == fit.param + @test fit_inplace.param ≈ fit.param println("\t Non-inplace with jacobian") @@ -102,14 +102,14 @@ let fit_inplace_jac = @time curve_fit(model_inplace, jacobian_model_inplace, xdata, ydata, p0; inplace = true, maxIter=100) @test fit_inplace_jac.converged - @test fit_jac.param == fit_inplace_jac.param + @test fit_jac.param ≈ fit_inplace_jac.param # some example data yvars = 1e-6*rand(length(xdata)) - ydata = model(xdata, [1.0, 2.0]) + sqrt.(yvars) .* randn(length(xdata)) + ydata = model.(xdata, Ref([1.0, 2.0])) + sqrt.(yvars) .* randn(length(xdata)) println("--------------\nPerformance of curve_fit with weights") diff --git a/test/geodesic.jl b/test/geodesic.jl index c8fa0dd..df52864 100644 --- a/test/geodesic.jl +++ b/test/geodesic.jl @@ -1,5 +1,5 @@ -let - +@testset "geodesic" begin + # fitting noisy data to an exponential model model(x, p) = @. p[1] * exp(-x * p[2]) @@ -13,7 +13,7 @@ let function jacobian_model(x,p) J = Array{Float64}(undef, length(x), length(p)) @. J[:,1] = exp(-x*p[2]) #dmodel/dp[1] - @. @views J[:,2] = -x*p[1]*J[:,1] + @. @views J[:,2] = -x*p[1]*J[:,1] J end @@ -23,8 +23,8 @@ let hessians = Array{Float64}(undef, length(xdata)*length(p0), length(p0)) h! = make_hessian(model_inplace, xdata, p0) auto_avv! 
= Avv(h!, length(p0), length(xdata)) - - + + # a couple notes on the Avv function: # - the basic idea is to see the model output as simply a collection of functions: f1...fm @@ -40,13 +40,13 @@ let #h21 = h12 h22 = (xdata[i]^2 * p[1] * exp(-xdata[i] * p[2])) - # manually compute v'Hv. This whole process might seem cumbersome, but - # allocating temporary matrices quickly becomes REALLY expensive and might even - # render the use of geodesic acceleration terribly inefficient + # manually compute v'Hv. This whole process might seem cumbersome, but + # allocating temporary matrices quickly becomes REALLY expensive and might even + # render the use of geodesic acceleration terribly inefficient dir_deriv[i] = h11*v1^2 + 2*h12*v1*v2 + h22*v2^2 end - end + end curve_fit(model, jacobian_model, xdata, ydata, p0; maxIter=1); #warmup @@ -64,7 +64,7 @@ let fit_geo = @time curve_fit(model, jacobian_model, xdata, ydata, p0; maxIter=10, avv! = manual_avv!,lambda=0, min_step_quality = 0) @test fit_geo.converged - + println("\t Geodesic - auto avv!") fit_geo_auto = @time curve_fit(model, jacobian_model, xdata, ydata, p0; maxIter=10, avv! = auto_avv!,lambda=0, min_step_quality = 0) @test fit_geo_auto.converged diff --git a/test/levenberg_marquardt.jl b/test/levenberg_marquardt.jl index 2abbd22..7b99360 100644 --- a/test/levenberg_marquardt.jl +++ b/test/levenberg_marquardt.jl @@ -10,7 +10,7 @@ R1 = OnceDifferentiable(f_lm, g_lm, zeros(2), zeros(2); inplace = false) results = LsqFit.levenberg_marquardt(R1, initial_x) - @assert norm(OptimBase.minimizer(results) - [0.0, 2.0]) < 0.01 + @test norm(OptimBase.minimizer(results) - [0.0, 2.0]) < 0.01 function rosenbrock_res(r, x) @@ -34,7 +34,7 @@ results = LsqFit.levenberg_marquardt(R2, initial_xrb) - @assert norm(OptimBase.minimizer(results) - [1.0, 1.0]) < 0.01 + @test norm(OptimBase.minimizer(results) - [1.0, 1.0]) < 0.01 # check estimate is within the bound PR #278 result = LsqFit.levenberg_marquardt(R2, [150.0, 150.0]; lower = [10.0, 10.0], upper = [200.0, 200.0]) @@ -60,7 +60,7 @@ R3 = OnceDifferentiable(f_lsq, g_lsq, zeros(2), zeros(_nobs); inplace = false) results = LsqFit.levenberg_marquardt(R3, [0.5, 0.5]) - @assert norm(OptimBase.minimizer(results) - [1.0, 2.0]) < 0.05 + @test norm(OptimBase.minimizer(results) - [1.0, 2.0]) < 0.05 end let
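
For reference, the per-observation model convention documented in the README changes above can be exercised end to end roughly as follows. This is a minimal sketch assuming the patched `curve_fit`, `stderror`, and `confidence_interval` methods exported by LsqFit; the parameter values and noise level are illustrative only.

```julia
using LsqFit

# the model receives a single value of the independent variable and the parameter vector
model(x, p) = p[1] * exp(-x * p[2])

xdata = range(0, stop=10, length=20)
# broadcast the model over the data; Ref(p) keeps the parameter vector from being broadcast over
ydata = model.(xdata, Ref([1.0, 2.0])) .+ 0.01 .* randn(length(xdata))

fit = curve_fit(model, xdata, ydata, [0.5, 0.5])
fit.param                       # fitted parameter estimates
stderror(fit)                   # standard errors of the parameters
confidence_interval(fit, 0.05)  # 95% confidence intervals
```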