path: root/src/intel/vulkan/anv_pipeline.c
Age | Commit message | Author | Files | Lines
2019-07-18 | anv: fix alphaToCoverage when there is no color attachment | Samuel Iglesias Gonsálvez | 1 | -10/+33
There are tests in CTS for alpha to coverage without a color attachment
that are failing. This happens because we remove the shader color
outputs when we don't have a valid color attachment for them, but when
alpha to coverage is enabled we still want to preserve the output at
location 0 since we need the alpha component. In that case we will also
need to create a null render target for RT 0.

v2:
- We already create a null rt when we don't have any, so reuse that for
  this case (Jason)
- Simplify the code a bit (Iago)

v3:
- Take alpha to coverage from the key and don't tie this to depth-only
  rendering only; we want the same behavior if we have multiple render
  targets but the one at location 0 is not used. (Jason)
- Rewrite commit message (Iago)

v4:
- Make sure we take into account the array length of the shader
  outputs, which we were not handling correctly either, and make sure
  we also create null render targets for any invalid array entries too.

v5:
- Simplify removal of unused outputs by using rt_used[] so we don't
  have to special-case alpha to coverage there too.

Fixes the following CTS tests:
dEQP-VK.pipeline.multisample.alpha_to_coverage_no_color_attachment.*

Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Signed-off-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
(cherry picked from commit bc66cebc0df0a7858264c7a6da96f60cdc5c8292)
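A minimal sketch of the resulting rule in plain C (the function and
parameter names are illustrative, not the actual anv code):

    #include <stdbool.h>

    /* Sketch only: whether a fragment shader output at the given location
     * must be kept.  Outputs kept without a valid attachment get a null
     * render target, so the color write goes nowhere but the alpha of the
     * output at location 0 still feeds alpha to coverage. */
    static bool
    rt_output_is_used(unsigned location, bool attachment_is_valid,
                      bool alpha_to_coverage)
    {
       if (location == 0 && alpha_to_coverage)
          return true;   /* alpha to coverage needs output 0's alpha */
       return attachment_is_valid;
    }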
2019-05-21 | anv: Only consider minSampleShading when sampleShadingEnable is set | Jason Ekstrand | 1 | -2/+2
From the Vulkan 1.1.107 spec:

   Sample shading is enabled for a graphics pipeline:

   - If the interface of the fragment shader entry point of the
     graphics pipeline includes an input variable decorated with
     SampleId or SamplePosition. In this case minSampleShadingFactor
     takes the value 1.0.

   - Else if the sampleShadingEnable member of the
     VkPipelineMultisampleStateCreateInfo structure specified when
     creating the graphics pipeline is set to VK_TRUE. In this case
     minSampleShadingFactor takes the value of
     VkPipelineMultisampleStateCreateInfo::minSampleShading.

   Otherwise, sample shading is considered disabled.

In other words, if sampleShadingEnable is set to VK_FALSE, we should
ignore minSampleShading.

Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
(cherry picked from commit 1c92358bd89313b0cf7bf7b84992a28f11b2aa8f)
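A quick illustration of that rule in C (names are illustrative, not the
actual anv code):

    #include <stdbool.h>

    /* Illustrative only: the effective sample-shading factor implied by
     * the spec text above.  Returns 0.0 when sample shading is disabled. */
    static float
    effective_min_sample_shading(bool fs_reads_sample_id_or_position,
                                 bool sample_shading_enable,
                                 float min_sample_shading)
    {
       if (fs_reads_sample_id_or_position)
          return 1.0f;
       if (sample_shading_enable)
          return min_sample_shading;
       return 0.0f;   /* sampleShadingEnable == VK_FALSE: minSampleShading is ignored */
    }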
2019-04-30 | anv: enable descriptor indexing capabilities | Juan A. Suarez Romero | 1 | -0/+2
This enables the remaining capabilities in SPV_EXT_descriptor_indexing.

Fixes: 6e230d7607f "anv: Implement VK_EXT_descriptor_indexing"
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-04-19 | anv/nir: Add a central helper for figuring out SSBO address formats | Jason Ekstrand | 1 | -19/+5
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 | anv: Implement VK_EXT_descriptor_indexing | Jason Ekstrand | 1 | -0/+9
Now that everything is in place to do bindless for all resource types
except input attachments and UBOs, VK_EXT_descriptor_indexing is
"trivial".

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 | anv: Implement VK_KHR_shader_atomic_int64 | Jason Ekstrand | 1 | -0/+1
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 | anv: Implement SSBOs bindings with GPU addresses in the descriptor BO | Jason Ekstrand | 1 | -6/+26
This commit adds a new way for ANV to do SSBO bindings by just passing
a GPU address in through the descriptor buffer and using the A64
messages to access the GPU address directly. This means that our
variable pointers are now "real" pointers instead of a
vec2(BTI, offset) pair. This carries a few advantages:

 1. It lets us support a virtually unbounded number of SSBO bindings.

 2. It lets us implement VK_KHR_shader_atomic_int64, which we couldn't
    implement before because those atomic messages are only available
    in the bindless A64 form.

 3. It's way better than messing around with bindless handles for
    SSBOs, which is the only other option for
    VK_EXT_descriptor_indexing.

 4. It's more future looking, maybe? At the least, this is what NVIDIA
    does (they don't have binding based SSBOs at all). This doesn't a
    priori mean it's better, it just means it's probably not terrible.

The big disadvantage, of course, is that we have to start doing our own
bounds checking for robustBufferAccess again and we have to push in
dynamic offsets.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
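A rough sketch of what an SSBO descriptor entry looks like under this
scheme (field names and layout are assumptions for illustration, not
the actual anv layout):

    #include <stdint.h>

    /* Hypothetical SSBO descriptor entry: a raw 64-bit GPU address plus
     * a range, since bounds checking for robustBufferAccess now has to
     * be done in the shader instead of by surface state. */
    struct ssbo_address_descriptor {
       uint64_t address;   /* base GPU address, used with A64 messages */
       uint64_t range;     /* buffer size, for manual bounds checking */
    };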
2019-04-19 | anv: Add a has_a64_buffer_access to anv_physical_device | Jason Ekstrand | 1 | -2/+1
This is more descriptive and a bit nicer than checking for
gen >= 8 && use_softpin everywhere.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-18 | anv/pipeline: support Float16 and Int8 SPIR-V capabilities in gen8+ | Iago Toral Quiroga | 1 | -0/+2
v2:
- Merge Float16 and Int8 capabilities into a single patch (Jason)
- Merged patch that enabled SPIR-V front-end checks for these caps
  (except for Int8, which was already merged)

v3:
- Keep capabilities sorted (Jason)

v4:
- SpvCapabilityFloat16 support already added in master (Juan)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
2019-04-08 | anv: Implement VK_NV_compute_shader_derivatives | Caio Marcelo de Oliveira Filho | 1 | -0/+1
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-03-25 | i965,iris,anv: Make alpha to coverage work with sample mask | Danylo Piliaiev | 1 | -2/+9
From the "Alpha Coverage" section of SKL PRM Volume 7:

   "If Pixel Shader outputs oMask, AlphaToCoverage is disabled in
    hardware, regardless of the state setting for this feature."

From OpenGL spec 4.6, "15.2 Shader Execution":

   "The built-in integer array gl_SampleMask can be used to change the
    sample coverage for a fragment from within the shader."

From OpenGL spec 4.6, "17.3.1 Alpha To Coverage":

   "If SAMPLE_ALPHA_TO_COVERAGE is enabled, a temporary coverage value
    is generated where each bit is determined by the alpha value at the
    corresponding sample location. The temporary coverage value is then
    ANDed with the fragment coverage value to generate a new fragment
    coverage value."

Similar wording can be found in the Vulkan spec 1.1.100,
"25.6. Multisample Coverage".

Thus we need to compute alpha to coverage dithering manually in the
shader and replace the sample mask store with the bitwise AND of the
sample mask and the alpha to coverage dither mask.

The following formula is used to compute the final sample mask:

   m = int(16.0 * clamp(src0_alpha, 0.0, 1.0))
   dither_mask = 0x1111 * ((0xfea80 >> (m & ~3)) & 0xf) |
                 0x0808 * (m & 2) | 0x0100 * (m & 1)
   sample_mask = sample_mask & dither_mask

Credits to Francisco Jerez <currojerez@riseup.net> for creating it.
It gives a number of ones proportional to the alpha for 2, 4, 8 or 16
least significant bits of the result.

GEN6 hardware does not have an issue with simultaneous usage of sample
mask and alpha to coverage, however due to the wrong sending order of
oMask and src0_alpha it is still affected by it.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109743
Signed-off-by: Danylo Piliaiev <danylo.piliaiev@globallogic.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
2019-03-22 | spirv,nir: lower frexp_exp/frexp_sig inside a new NIR pass | Samuel Pitoiset | 1 | -0/+2
This lowering isn't needed for RADV because AMDGCN has two
instructions. It will be disabled for RADV in an upcoming series.

While we are at it, factorize a little bit.

Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-03-20 | anv: implement VK_EXT_pipeline_creation_feedback | Lionel Landwerlin | 1 | -3/+84
An extension reporting cache hits in the user supplied pipeline cache
as well as timing information for creating the pipelines & stages.

v2: Don't consider no cache for cache hits (Jason)
    Rework duration accumulation (Jason)

v3: Fold feedback creation writing into pipeline compile functions
    (Jason/Lionel)

v4: Get cache hit information from anv_device_search_for_kernel()
    (Jason)
    Only set cache hit from the whole pipeline if all stages also have
    that bit (Lionel)

v5: Always set user_cache_hit in anv_device_search_for_kernel() (Jason)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-03-15 | nir: Rename nir_address_format_vk_index_offset to not be vk | Jason Ekstrand | 1 | -1/+1
It's just a 32-bit index and offset. We're going to want to use it in
GL as well, so stop talking about Vulkan.

Reviewed-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-03-11 | anv: add support for dumping shader info via VK_EXT_debug_report | Timothy Arceri | 1 | -11/+17
This information will be used by the vkpipeline-db tool.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-03-08 | anv/pipeline: Move lower_explicit_io much later | Jason Ekstrand | 1 | -3/+5
Now that nir_opt_copy_prop_vars can properly handle array derefs on
vectors, it's safe to move UBO and SSBO lowering to late in the
pipeline. This should allow NIR to actually start optimizing SSBO
access.

Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-03-06 | nir/lower_doubles: Inline functions directly in lower_doubles | Jason Ekstrand | 1 | -1/+1
Instead of trusting the caller to already have created a softfp64
function shader and added all its functions to our shader, we simply
take the softfp64 shader as an argument and do the function inlining
ourselves. This means that there are no more nasty functions lying
around that the caller needs to worry about cleaning up.

Reviewed-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2019-03-04 | anv/pipeline: Drop anv_fill_binding_table | Jason Ekstrand | 1 | -26/+0
We zero out the prog data anyway and, now that bias is always zero,
this function is accomplishing nothing.

Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-03-04 | anv: Use an actual binding for gl_NumWorkgroups | Jason Ekstrand | 1 | -1/+7
This commit moves our handling of gl_NumWorkgroups over to work like
our handling of other special bindings in the Vulkan driver. We give it
a magic descriptor set number and teach emit_binding_tables to handle
it. This is better than the bias mechanism we were using because it
allows us to do proper accounting through the bind map mechanism.

Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-02-20 | anv: implement VK_EXT_depth_clip_enable | Lionel Landwerlin | 1 | -0/+10
A new extension allowing the user to explicitly specify the clipping
behavior.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-02-01 | anv: Implement VK_EXT_buffer_device_address | Jason Ekstrand | 1 | -0/+6
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-22 | anv: Implement the basic form of VK_EXT_transform_feedback | Jason Ekstrand | 1 | -1/+10
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-22 | anv: Add pipeline cache support for xfb_info | Jason Ekstrand | 1 | -2/+2
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-21 | anv/pipeline: Add a pdevice helper variable | Jason Ekstrand | 1 | -9/+10
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
2019-01-19 | nir: rename nir_var_ssbo to nir_var_mem_ssbo | Karol Herbst | 1 | -1/+1
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-19 | nir: rename nir_var_ubo to nir_var_mem_ubo | Karol Herbst | 1 | -1/+1
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-19 | nir: rename nir_var_function to nir_var_function_temp | Karol Herbst | 1 | -2/+2
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-10 | anv/pipeline: Cache the pre-lowered NIR | Jason Ekstrand | 1 | -10/+39
This adds a second level of caching for the pre-lowered NIR that's only
based off of the shader module, entrypoint and specialization
constants. This is enough for spirv_to_nir as well as our first round
of lowering and optimization. Caching at this level should allow for
faster shader recompiles due to state changes.

The NIR caching does not get serialized to disk via either the
VkPipelineCache serialization mechanism or the transparent on-disk
cache. We could, but it's usually not that expensive to fall back to
SPIR-V for the odd cache miss, especially if it only happens once for
several misses, and it simplifies the cache.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-10 | anv/pipeline: Hash shader modules and spec constants separately | Jason Ekstrand | 1 | -15/+39
The stuff hashed by anv_pipeline_hash_shader is exactly the inputs to
anv_shader_compile_to_nir so it can be used for NIR caching.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-10 | anv/pipeline: Move wpos and input attachment lowering to lower_nir | Jason Ekstrand | 1 | -11/+8
This lets us make anv_pipeline_compile_to_nir take a device instead of
a pipeline.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-10 | anv/pipeline: Constant fold after apply_pipeline_layout | Jason Ekstrand | 1 | -0/+1
Thanks to the new NIR load_descriptor intrinsic added by the UBO/SSBO
lowering series, we weren't getting UBO pushing because the UBO range
detection pass couldn't see the constants it needed. This fixes that
problem with a quick round of constant folding. Because we're folding,
we no longer need to go out of our way to generate constants when we
lower the vulkan_resource_index intrinsic and we can make it a bit
simpler.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-01-08 | nir: rename global/local to private/function memory | Karol Herbst | 1 | -2/+2
The naming is a bit confusing no matter how you look at it. Within
SPIR-V, "global" memory is memory accessible from all threads. GLSL
"global" memory normally refers to shader thread private memory
declared at global scope. As we already use "shared" for memory shared
across all threads of a work group, the solution everybody could be
happy with is to rename "global" to "private" and use "global" later
for memory usually stored within system accessible memory (be it VRAM
or system RAM if keeping SVM in mind).

GLSL "local" memory is memory only accessible within a function, while
SPIR-V "local" memory is memory accessible within the same workgroup.

v2: rename local to function as well
v3: rename vtn_variable_mode_local as well

Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-01-07 | spirv: Sort supported capabilities | Jason Ekstrand | 1 | -9/+9
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-01-08 | anv: Enable the new deref-based UBO/SSBO path | Jason Ekstrand | 1 | -1/+3
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-01-08 | spirv: Add support for using derefs for UBO/SSBO access | Jason Ekstrand | 1 | -0/+1
For now, it's hidden behind a cap. Hopefully, we can eventually drop
that along with all the manual offset code in spirv_to_nir.

Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Tested-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
2019-01-08 | spirv: Add explicit pointer types | Jason Ekstrand | 1 | -0/+4
Instead of baking in uvec2 for UBO and SSBO pointers and uint for push
constant and shared memory pointers, make it configurable.

Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-01-08 | nir: Move propagation of cast derefs to a new nir_opt_deref pass | Jason Ekstrand | 1 | -1/+1
We're going to want to do more deref optimizations going forward and
this gives us a central place to do them. Also, cast propagation will
get a bit more complicated with the addition of ptr_as_array derefs.

Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2018-12-11 | anv: Advertise support for MinLod on Skylake+ | Jason Ekstrand | 1 | -0/+1
These are usually used for dealing with sparse resources but there's no
reason why we can't hook them up before we have sparse. We have the
hardware; let's light it up.

Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
2018-11-22 | anv/nir: Rework arguments to apply_pipeline_layout | Jason Ekstrand | 1 | -1/+3
Instead of taking a whole pipeline (which could be anything!), just
take a physical device and robust_buffer_access boolean. This makes it
easier to verify that only the things in the hash actually affect
pipeline compilation.

Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
2018-11-22 | anv: Put robust buffer access in the pipeline hash | Jason Ekstrand | 1 | -0/+6
It affects apply_pipeline_layout. Shaders compiled with the wrong value
will work, but they may not be robust as requested by the app.

Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
2018-10-30 | intel/compiler: Stop assuming the entrypoint is called "main" | Jason Ekstrand | 1 | -1/+0
This isn't true for Vulkan, so we have to whack it to "main" in anv,
which is silly. Instead of walking the list of functions and asserting
that everything is named "main" and hoping there's only one function
named "main", just use the nir_shader_get_entrypoint() helper which has
better assertions anyway.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2018-10-26 | nir/validate: Print when the validation failed | Jason Ekstrand | 1 | -1/+1
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
2018-09-07 | Replace uses of _mesa_bitcount with util_bitcount | Dylan Baker | 1 | -1/+1
and _mesa_bitcount_64 with util_bitcount_64. This fixes a build problem
in nir for platforms that don't have popcount or popcountll, such as
32bit msvc.

v2:
- Fix additional uses of _mesa_bitcount added after this was originally
  written

Acked-by: Eric Engestrom <eric.engestrom@intel.com> (v1)
Acked-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
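For context, a portable population-count fallback of the kind such a
helper needs when no popcount builtin is available (a sketch, not
Mesa's actual util_bitcount implementation):

    #include <stdint.h>

    /* Counts set bits without relying on __builtin_popcountll or the
     * popcnt instruction being available (e.g. 32-bit MSVC). */
    static unsigned
    bitcount64(uint64_t v)
    {
       unsigned count = 0;
       while (v) {
          v &= v - 1;   /* clear the lowest set bit */
          count++;
       }
       return count;
    }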
2018-08-29 | anv,i965: Lower away image derefs in the driver | Jason Ekstrand | 1 | -2/+2
Previously, the back-end compiler turned image access into magic
uniform reads and there was a complex contract between the back-end
compiler and the driver about setting up and filling out those params.
As of this commit, both drivers now lower image_deref_load_param_intel
intrinsics to load_uniform intrinsics controlled by the driver and
lower the other image_deref_* intrinsics to image_* intrinsics which
take an actual binding table index. There are still "magic" uniforms
but they are now added and controlled entirely by the driver and that
contract no longer spans components.

This also has the side-effect of making most image use compile-time
binding table indices. Previously, all image access pulled the binding
table index from a uniform. Part of the reason for this was that the
magic uniforms made it difficult to decouple binding table indices from
the uniforms and, since they are indexed completely differently
(especially in Vulkan), it was hard to pull them apart. Now that the
driver is handling both, it's trivial to decouple the two and provide
actual binding table indices.

Shader-db results on Kaby Lake:

    total instructions in shared programs: 15166872 -> 15164293 (-0.02%)
    instructions in affected programs: 115834 -> 113255 (-2.23%)
    helped: 191
    HURT: 0

    total cycles in shared programs: 571311495 -> 571196465 (-0.02%)
    cycles in affected programs: 4757115 -> 4642085 (-2.42%)
    helped: 73
    HURT: 67

    total spills in shared programs: 10951 -> 10926 (-0.23%)
    spills in affected programs: 742 -> 717 (-3.37%)
    helped: 7
    HURT: 0

    total fills in shared programs: 22226 -> 22201 (-0.11%)
    fills in affected programs: 1146 -> 1121 (-2.18%)
    helped: 7
    HURT: 0

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2018-08-29 | intel/compiler: Do image load/store lowering to NIR | Jason Ekstrand | 1 | -0/+2
This commit moves our storage image format conversion codegen into NIR
instead of doing it in the back-end. This has the advantage of letting
us run it through NIR's optimizer, which is pretty effective at
shrinking things down. In the common case of rgba8, the number of
instructions emitted after NIR is done with it is half of what it was
with the lowering happening in the back-end. On the downside, the
back-end's lowering is able to directly use predicates and the NIR
lowering has to use IFs.

Shader-db results on Kaby Lake:

    total instructions in shared programs: 15166910 -> 15166872 (<.01%)
    instructions in affected programs: 5895 -> 5857 (-0.64%)
    helped: 15
    HURT: 0

Clearly, we don't have that much image_load_store happening in the
shaders in shader-db....

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2018-08-17 | anv/pipeline: Lower pipeline layouts etc. after linking | Jason Ekstrand | 1 | -30/+28
This allows us to use the link-optimized shader for determining binding
table layouts and, more importantly, URB layouts. For apps running on
DXVK, this is extremely important as DXVK likes to declare max-size
inputs and outputs and this lets us massively shrink our URB space
requirements.

VkPipeline-db results (Batman pipelines only) on KBL:

    total instructions in shared programs: 820403 -> 790008 (-3.70%)
    instructions in affected programs: 273759 -> 243364 (-11.10%)
    helped: 622
    HURT: 42

    total spills in shared programs: 8449 -> 5212 (-38.31%)
    spills in affected programs: 3427 -> 190 (-94.46%)
    helped: 607
    HURT: 2

    total fills in shared programs: 11638 -> 6067 (-47.87%)
    fills in affected programs: 5879 -> 308 (-94.76%)
    helped: 606
    HURT: 3

Looking at shaders by hand, it makes the URB between TCS and TES go
from containing 32 per-vertex varyings per tessellation shader pair to
a more reasonable 8-12. For a 3-vertex patch, that's at least half the
URB space no matter how big the patch section is.

Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
2018-08-17 | anv/pipeline: Set tess IO read/written key fields in compile_* | Jason Ekstrand | 1 | -9/+10
We want these to be set as close to the final compile as possible so
that they are guaranteed to happen after nir_shader_gather_info is
called. The next commit is going to move nir_shader_gather_info to
after the linking step, which makes this necessary.

Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
2018-08-17 | anv/pipeline: Use more fields from stage in compile_cs | Jason Ekstrand | 1 | -16/+21
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2018-08-09 | anv: set error in all failure paths | Eric Engestrom | 1 | -1/+3
Cc: Jason Ekstrand <jason.ekstrand@intel.com>
Fixes: 5b196f39bddc689742d3 "anv/pipeline: Compile to NIR in compile_graphics"
Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2018-08-03 | anv/pipeline: Disable FS dispatch for pointless fragment shaders | Jason Ekstrand | 1 | -4/+33
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>