Gradle User Manual

Version 4.5.1

Gradle build tool source code is open and licensed under the Apache License 2.0. Gradle user manual and DSL references are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Table of Contents

About Gradle
Introduction
Overview
Working with existing builds
Installing Gradle
Command-Line Interface
The Gradle Wrapper
The Gradle Daemon
Dependency Management for Java Projects
Executing Multi-Project Builds
Continuous build
Composite builds
Build Environment
Troubleshooting
Embedding Gradle using the Tooling API
Build Cache
Writing Gradle build scripts
Build Script Basics
Build Init Plugin
Writing Build Scripts
Authoring Tasks
Working With Files
Using Ant from Gradle
Build Lifecycle
Logging
Authoring Multi-Project Builds
Using Gradle Plugins
Standard Gradle plugins
The Project Report Plugin
The Build Dashboard Plugin
Comparing Builds
Publishing artifacts
The Maven Plugin
The Signing Plugin
Ivy Publishing (new)
Maven Publishing (new)
The Distribution Plugin
The Announce Plugin
The Build Announcements Plugin
Dependency management
Introduction to Dependency Management
Declaring Dependencies
Declaring Repositories
Inspecting Dependencies
Managing Transitive Dependencies
Working with Dependencies
Customizing Dependency Resolution Behavior
Troubleshooting Dependency Resolution
Extending the build
Writing Custom Task Classes
Writing Custom Plugins
Gradle Plugin Development Plugin
Organizing Build Logic
Lazy Configuration
Initialization Scripts
Testing Build Logic with TestKit
Building JVM projects
Java Quickstart
The Java Plugin
The Java Library Plugin
Web Application Quickstart
The War Plugin
The Ear Plugin
The Jetty Plugin
The Application Plugin
The Java Library Distribution Plugin
Groovy Quickstart
The Groovy Plugin
The Scala Plugin
The ANTLR Plugin
The Checkstyle Plugin
The CodeNarc Plugin
The FindBugs Plugin
The JDepend Plugin
The PMD Plugin
The JaCoCo Plugin
The OSGi Plugin
The Eclipse Plugins
The IDEA Plugin
The Software model
Rule based model configuration
Software model concepts
Implementing model rules in a plugin
Building Java Libraries
Building Play applications
Building native software
Extending the software model
Glossary
Dependency Types
Repository Types
Appendix
A. Gradle Samples
B. Potential Traps
C. The Feature Lifecycle
D. Documentation licenses

List of Examples

1. Excluding tasks
2. Abbreviated camel case task name
3. Obtaining detailed help for tasks
4. Information about properties
5. Running the Wrapper task
6. The generated distribution URL
7. Providing options to Wrapper task
8. The generated distribution URL
9. Executing the build with the Wrapper batch file
10. Upgrading the Wrapper version
11. Checking the Wrapper version after upgrading
12. Customizing the Wrapper task
13. The generated distribution URL
14. Specifying the HTTP Basic Authentication credentials using system properties
15. Specifying the HTTP Basic Authentication credentials in distributionUrl
16. Configuring SHA-256 checksum verification
17. Declaring dependencies
18. Definition of an external dependency
19. Shortcut definition of an external dependency
20. Usage of Maven central repository
21. Usage of JCenter repository
22. Usage of a remote Maven repository
23. Usage of a remote Ivy directory
24. Usage of a local Ivy directory
25. Publishing to an Ivy repository
26. Publishing to a Maven repository
27. Listing the projects in a build
28. Dependencies of my-app
29. Declaring a command-line composite
30. Declaring a separate composite
31. Depending on task from included build
32. Build that does not declare group attribute
33. Declaring the substitutions for an included build
34. Depending on a single task from an included build
35. Depending on a task with path in all included builds
36. Setting properties with a gradle.properties file
37. Specifying system properties in gradle.properties
38. Setting a project property via gradle.properties
39. Setting a project property via environment variable
40. Changing JVM settings for Gradle client JVM
41. Changing JVM settings for forked Gradle JVMs
42. Set Java compile options for JavaCompile tasks
43. Prevent releasing outside of CI
44. Configuring an HTTP proxy using gradle.properties
45. Configuring an HTTPS proxy using gradle.properties
46. Using the tooling API
47. Configure the local cache
48. Pull from HttpBuildCache
49. Configure remote HTTP cache
50. Allow untrusted SSL certificate for HttpBuildCache
51. Recommended setup for CI push use case
52. Consistent setup for buildSrc and main build
53. Init script to configure the build cache
54. Your first build script
55. Execution of a build script
56. A task definition shortcut
57. Using Groovy in Gradle's tasks
58. Using Groovy in Gradle's tasks
59. Declaration of task that depends on other task
60. Lazy dependsOn - the other task does not exist (yet)
61. Dynamic creation of a task
62. Accessing a task via API - adding a dependency
63. Accessing a task via API - adding behaviour
64. Accessing task as a property of the build script
65. Adding extra properties to a task
66. Using AntBuilder to execute ant.loadfile target
67. Using methods to organize your build logic
68. Defining a default task
69. Different outcomes of build depending on chosen tasks
70. Accessing property of the Project object
71. Using local variables
72. Using extra properties
73. Configuring arbitrary objects
74. Configuring arbitrary objects using a script
75. Groovy JDK methods
76. Property accessors
77. Method call without parentheses
78. List and map literals
79. Closure as method parameter
80. Closure delegates
81. Defining tasks
82. Defining tasks - using strings for task names
83. Defining tasks with alternative syntax
84. Accessing tasks as properties
85. Accessing tasks via tasks collection
86. Accessing tasks by path
87. Creating a copy task
88. Configuring a task - various ways
89. Configuring a task - with closure
90. Defining a task with closure
91. Adding dependency on task from another project
92. Adding dependency using task object
93. Adding dependency using closure
94. Adding a 'must run after' task ordering
95. Adding a 'should run after' task ordering
96. Task ordering does not imply task execution
97. A 'should run after' task ordering is ignored if it introduces an ordering cycle
98. Adding a description to a task
99. Overwriting a task
100. Skipping a task using a predicate
101. Skipping tasks with StopExecutionException
102. Enabling and disabling tasks
103. Custom task class
104. Ad-hoc task
105. Ad-hoc task declaring a destroyable
106. Using runtime API with custom task type
107. Using skipWhenEmpty() via the runtime API
108. Inferred task dependency via task outputs
109. Inferred task dependency via a task argument
110. Declaring a method to add task inputs
111. Declaring a method to add a task as an input
112. Failed attempt at setting up an inferred task dependency
113. Setting up an inferred task dependency between output dir and input files
114. Setting up an inferred task dependency with files()
115. Setting up an inferred task dependency with builtBy()
116. Ignoring up-to-date checks
117. Runtime classpath normalization
118. Task rule
119. Dependency on rule based tasks
120. Adding a task finalizer
121. Task finalizer for a failing task
122. Locating files
123. Creating a file collection
124. Using a file collection
125. Implementing a file collection
126. Creating a file tree
127. Using a file tree
128. Using an archive as a file tree
129. Specifying a set of files
130. Appending a set of files
131. Copying files using the copy task
132. Specifying copy task source files and destination directory
133. Selecting the files to copy
134. Copying files using the copy() method without up-to-date check
135. Copying files using the copy() method with up-to-date check
136. Renaming files as they are copied
137. Filtering files as they are copied
138. Nested copy specs
139. Using the Sync task to copy dependencies
140. Creating a ZIP archive
141. Creation of ZIP archive
142. Configuration of archive task - custom archive name
143. Configuration of archive task - appendix & classifier
144. Activating reproducible archives
145. Using an Ant task
146. Passing nested text to an Ant task
147. Passing nested elements to an Ant task
148. Using an Ant type
149. Using a custom Ant task
150. Declaring the classpath for a custom Ant task
151. Using a custom Ant task and dependency management together
152. Importing an Ant build
153. Task that depends on Ant target
154. Adding behaviour to an Ant target
155. Ant target that depends on Gradle task
156. Renaming imported Ant targets
157. Setting an Ant property
158. Getting an Ant property
159. Setting an Ant reference
160. Getting an Ant reference
161. Fine tuning Ant logging
162. Single project build
163. Hierarchical layout
164. Flat layout
165. Lookup of elements of the project tree
166. Modification of elements of the project tree
167. Adding of test task to each project which has certain property set
168. Notifications
169. Setting of certain property to all tasks
170. Logging of start and end of each task execution
171. Using stdout to write log messages
172. Writing your own log messages
173. Writing a log message with placeholder
174. Using SLF4J to write log messages
175. Configuring standard output capture
176. Configuring standard output capture for a task
177. Customizing what Gradle logs
178. Multi-project tree - water & bluewhale projects
179. Build script of water (parent) project
180. Multi-project tree - water, bluewhale & krill projects
181. Water project build script
182. Defining common behavior of all projects and subprojects
183. Defining specific behaviour for particular project
184. Defining specific behaviour for project krill
185. Adding custom behaviour to some projects (filtered by project name)
186. Adding custom behaviour to some projects (filtered by project properties)
187. Running build from subproject
188. Evaluation and execution of projects
189. Evaluation and execution of projects
190. Running tasks by their absolute path
191. Dependencies and execution order
192. Dependencies and execution order
193. Dependencies and execution order
194. Declaring dependencies
195. Declaring dependencies
196. Cross project task dependencies
197. Configuration time dependencies
198. Configuration time dependencies - evaluationDependsOn
199. Configuration time dependencies
200. Dependencies - real life example - crossproject configuration
201. Project lib dependencies
202. Project lib dependencies
203. Fine grained control over dependencies
204. Build and Test Single Project
205. Partial Build and Test Single Project
206. Build and Test Depended On Projects
207. Build and Test Dependent Projects
208. Applying a script plugin
209. Applying a core plugin
210. Applying a community plugin
211. Applying plugins only on certain subprojects.
212. Using plugins from custom plugin repositories.
213. Plugin resolution strategy.
214. Complete Plugin Publishing Sample
215. Applying a binary plugin
216. Applying a binary plugin by type
217. Applying a plugin with the buildscript block
218. Using the Build Dashboard plugin
219. Defining an artifact using an archive task
220. Defining an artifact using a file
221. Customizing an artifact
222. Map syntax for defining an artifact using a file
223. Configuration of the upload task
224. Using the Maven plugin
225. Creating a standalone pom.
226. Upload of file to remote Maven repository
227. Upload of file via SSH
228. Customization of pom
229. Builder style customization of pom
230. Modifying auto-generated content
231. Customization of Maven installer
232. Generation of multiple poms
233. Accessing a mapping configuration
234. Using the Signing plugin
235. Sign with GnuPG
236. Configure the GnupgSignatory
237. Signing a configuration
238. Signing a configuration output
239. Signing a task
240. Signing a task output
241. Conditional signing
242. Signing a POM for deployment
243. Applying the “ivy-publish” plugin
244. Publishing a Java module to Ivy
245. Publishing additional artifact to Ivy
246. customizing the publication identity
247. Customizing the module descriptor file
248. Publishing multiple modules from a single project
249. Declaring repositories to publish to
250. Choosing a particular publication to publish
251. Publishing all publications via the “publish” lifecycle task
252. Generating the Ivy module descriptor file
253. Publishing a Java module
254. Example generated ivy.xml
255. Applying the 'maven-publish' plugin
256. Adding a MavenPublication for a Java component
257. Adding additional artifact to a MavenPublication
258. customizing the publication identity
259. Modifying the POM file
260. Publishing multiple modules from a single project
261. Declaring repositories to publish to
262. Publishing a project to a Maven repository
263. Publish a project to the Maven local repository
264. Generate a POM file without publishing
265. Using the distribution plugin
266. Adding extra distributions
267. Configuring the main distribution
268. publish main distribution
269. Applying the announce plugin
270. Configure the announce plugin
271. Using the announce plugin
272. Using the build announcements plugin
273. Using the build announcements plugin from an init script
274. Declaring a binary dependency with a concrete version
275. Declaring a binary dependency with a dynamic version
276. Declaring a binary dependency with a changing version
277. Declaring multiple file dependencies
278. Declaring project dependencies
279. Declaring and using a custom configuration
280. Resolving a JavaScript artifact for a declared dependency
281. Resolving a JavaScript artifact with classifier for a declared dependency
282. Declaring JCenter repository as source for resolving dependencies
283. Declaring a custom repository by URL
284. Declaring multiple repositories
285. Declaring the JGit dependency with a custom configuration
286. Rendering the dependency report for a custom configuration
287. Declaring the JGit dependency and a conflicting dependency
288. Using the dependency insight report for a given dependency
289. Unresolved artifacts for transitive dependencies
290. Excluding transitive dependency for a particular dependency declaration
291. Excluding transitive dependency for a particular configuration
292. Enforcing a dependency version
293. Enforcing a dependency version on the configuration-level
294. Disabling transitive dependency resolution for a declared dependency
295. Disabling transitive dependency resolution on the configuration-level
296. Configuration.copy
297. Accessing declared dependencies
298. Configuration.files
299. Configuration.files with spec
300. Configuration.copy
301. Configuration.copy vs. Configuration.files
302. Forcing consistent version for a group of libraries
303. Using a custom versioning scheme
304. Blacklisting a version with a replacement
305. Changing dependency group and/or name at the resolution
306. Substituting a module with a project
307. Substituting a project with a module
308. Conditionally substituting a dependency
309. Specifying default dependencies on a configuration
310. Enabling dynamic resolve mode
311. 'Latest' version selector
312. Custom status scheme
313. Custom status scheme by module
314. Ivy component metadata rule
315. Rule source component metadata rule
316. Component selection rule
317. Component selection rule with module target
318. Component selection rule with metadata
319. Component selection rule using a rule source object
320. Declaring module replacement
321. Dynamic version cache control
322. Changing module cache control
323. Defining a custom task
324. A hello world task
325. A customizable hello world task
326. A build for a custom task
327. A custom task
328. Using a custom task in another project
329. Testing a custom task
330. Defining an incremental task action
331. Running the incremental task for the first time
332. Running the incremental task with unchanged inputs
333. Running the incremental task with updated input files
334. Running the incremental task with an input file removed
335. Running the incremental task with an output file removed
336. Running the incremental task with an input property changed
337. Creating a unit of work implementation
338. Submitting a unit of work for execution
339. Waiting for asynchronous work to complete
340. Submitting an item of work to run in a worker daemon
341. A custom plugin
342. A custom plugin extension
343. A custom plugin with configuration closure
344. Evaluating file properties lazily
345. Mapping extension properties to task properties
346. A build for a custom plugin
347. Wiring for a custom plugin
348. Using a custom plugin in another project
349. Applying a community plugin with the plugins DSL
350. Testing a custom plugin
351. Using the Java Gradle Plugin Development plugin
352. Nested DSL elements
353. Managing a collection of objects
354. Using the Java Gradle Plugin Development plugin
355. Using the gradlePlugin {} block.
356. Using inherited properties and methods
357. Using injected properties and methods
358. Configuring the project using an external build script
359. Custom buildSrc build script
360. Adding subprojects to the root buildSrc project
361. Running another build from a build
362. Declaring external dependencies for the build script
363. A build script with external dependencies
364. Ant optional dependencies
365. Using a read-only and configurable property
366. Using file and directory property
367. Implicit task dependency
368. List property
369. Using init script to perform extra configuration before projects are evaluated
370. Declaring external dependencies for an init script
371. An init script with external dependencies
372. Using plugins in init scripts
373. Declaring the TestKit dependency
374. Declaring the JUnit dependency
375. Using GradleRunner with JUnit
376. Using GradleRunner with Spock
377. Making the code under test classpath available to the tests
378. Injecting the code under test classes into test builds
379. Injecting the code under test classes into test builds for Gradle versions prior to 2.8
380. Using the Java Gradle Development plugin for generating the plugin metadata
381. Automatically injecting the code under test classes into test builds
382. Reconfiguring the classpath generation conventions of the Java Gradle Development plugin
383. Specifying a Gradle version for test execution
384. Testing cacheable tasks
385. Using the Java plugin
386. Building a Java project
387. Adding Maven repository
388. Adding dependencies
389. Customization of MANIFEST.MF
390. Adding a test system property
391. Publishing the JAR file
392. Eclipse plugin
393. Java example - complete build file
394. Multi-project build - hierarchical layout
395. Multi-project build - settings.gradle file
396. Multi-project build - common configuration
397. Multi-project build - dependencies between projects
398. Multi-project build - distribution file
399. Using the Java plugin
400. Custom Java source layout
401. Accessing a source set
402. Configuring the source directories of a source set
403. Defining a source set
404. Defining source set dependencies
405. Compiling a source set
406. Assembling a JAR for a source set
407. Generating the Javadoc for a source set
408. Running tests in a source set
409. Declaring annotation processors
410. Filtering tests in the build script
411. JUnit Categories
412. Grouping TestNG tests
413. Preserving order of TestNG tests
414. Grouping TestNG tests by instances
415. Creating a unit test report for subprojects
416. Customization of MANIFEST.MF
417. Creating a manifest object.
418. Separate MANIFEST.MF for a particular archive
419. Saving a MANIFEST.MF to disk
420. Configure Java 6 build
421. Using the Java Library plugin
422. Declaring API and implementation dependencies
423. Making the difference between API and implementation
424. Declaring API and implementation dependencies
425. Configuring the Groovy plugin to work with Java Library
426. War plugin
427. Running web application with Gretty plugin
428. Using the War plugin
429. Customization of war plugin
430. Using the Ear plugin
431. Customization of ear plugin
432. Using the application plugin
433. Configure the application main class
434. Configure default JVM settings
435. Configure custom directory for start scripts
436. Include output from other tasks in the application distribution
437. Automatically creating files for distribution
438. Using the Java library distribution plugin
439. Configure the distribution name
440. Include files in the distribution
441. Groovy plugin
442. Dependency on Groovy
443. Groovy example - complete build file
444. Using the Groovy plugin
445. Custom Groovy source layout
446. Configuration of Groovy dependency
447. Configuration of Groovy test dependency
448. Configuration of bundled Groovy dependency
449. Configuration of Groovy file dependency
450. Configure Java 6 build for Groovy
451. Using the Scala plugin
452. Custom Scala source layout
453. Declaring a Scala dependency for production code
454. Declaring a Scala dependency for test code
455. Declaring a version of the Zinc compiler to use
456. Forcing a scala-library dependency for all configurations
457. Forcing a scala-library dependency for the zinc configuration
458. Adjusting memory settings
459. Forcing all code to be compiled
460. Configure Java 6 build for Scala
461. Explicitly specify a target IntelliJ IDEA version
462. Using the ANTLR plugin
463. Declare ANTLR version
464. setting custom max heap size and extra arguments for ANTLR
465. Using the Checkstyle plugin
466. Using the config_loc property
467. Customizing the HTML report
468. Using the CodeNarc plugin
469. Using the FindBugs plugin
470. Customizing the HTML report
471. Using the JDepend plugin
472. Using the PMD plugin
473. Applying the JaCoCo plugin
474. Configuring JaCoCo plugin settings
475. Configuring test task
476. Configuring violation rules
477. Configuring test task
478. Using application plugin to generate code coverage data
479. Coverage reports generated by applicationCodeCoverageReport
480. Using the OSGi plugin
481. Configuration of OSGi MANIFEST.MF file
482. Using the Eclipse plugin
483. Using the Eclipse WTP plugin
484. Partial Overwrite for Classpath
485. Partial Overwrite for Project
486. Export Classpath Entries
487. Customizing the XML
488. Using the IDEA plugin
489. Partial Rewrite for Module
490. Partial Rewrite for Project
491. Export Dependencies
492. Customizing the XML
493. applying a rule source plugin
494. a model creation rule
495. a model mutation rule
496. creating a task
497. a managed type
498. a String property
499. a File property
500. a Long property
501. a boolean property
502. an int property
503. a managed property
504. an enumeration type property
505. a managed set
506. a scalar collection
507. strongly modelling source sets
508. a DSL example applying a rule to every element in a scope
509. DSL configuration rule
510. Configuration run when required
511. Configuration not run when not required
512. DSL creation rule
513. DSL creation rule without initialization
514. Initialization before configuration
515. Nested DSL creation rule
516. Nested DSL configuration rule
517. DSL configuration rule for each element in a map
518. Nested DSL property configuration
519. a DSL example showing type conversions
520. a DSL rule using inputs
521. model task output
522. Using the Java software plugins
523. Creating a java library
524. Configuring a source set
525. Creating a new source set
526. The components report
527. Declaring a dependency onto a library
528. Declaring a dependency onto a project with an explicit library
529. Declaring a dependency onto a project with an implicit library
530. Declaring a dependency onto a library published to a Maven repository
531. Declaring a module dependency using shorthand notation
532. Configuring repositories for dependency resolution
533. Specifying api packages
534. Specifying api dependencies
535. Main sources
536. Client component
537. Broken client component
538. Making non-API implementation-only change
539. Recompiling the client
540. Declaring target platforms
541. Declaring binary specific sources
542. Declaring target platforms
543. Using the JUnit plugin
544. Executing the test suite
545. Executing the test suite
546. Declaring a component under test
547. Declaring local Java installations
548. Using the Play plugin
549. The components report
550. Selecting a version of the Play Framework
551. Adding dependencies to a Play application
552. A Play 2.6 project
553. Adding Guice dependency in Play 2.6 project
554. Configuring extra source sets to a Play application
555. Adding extra source sets to a Play application
556. Configuring Scala compiler options
557. Configuring routes style
558. Configuring a custom asset pipeline
559. Configuring dependencies on Play subprojects
560. Add extra files to a Play application distribution
561. Applying both the Play and IDEA plugins
562. Defining a library component
563. Defining executable components
564. Sample build
565. Dependent components report
566. Dependent components report
567. Report of components that depend on the operators component
568. Report of components that depend on the operators component, including test suites
569. Assemble components that depend on the passing/static binary of the operators component
570. Build components that depend on the passing/static binary of the operators component
571. Adding a custom check task
572. Running checks for a given binary
573. The components report
574. The 'cpp' plugin
575. C++ source set
576. The 'c' plugin
577. C source set
578. The 'assembler' plugin
579. The 'objective-c' plugin
580. The 'objective-cpp' plugin
581. Settings that apply to all binaries
582. Settings that apply to all shared libraries
583. Settings that apply to all binaries produced for the 'main' executable component
584. Settings that apply only to shared libraries produced for the 'main' library component
585. The 'windows-resources' plugin
586. Configuring the location of Windows resource sources
587. Building a resource-only dll
588. Providing a library dependency to the source set
589. Providing a library dependency to the binary
590. Declaring project dependencies
591. Creating a precompiled header file
592. Including a precompiled header file in a source file
593. Configuring a precompiled header
594. Defining build types
595. Configuring debug binaries
596. Defining platforms
597. Defining flavors
598. Targeting a component at particular platforms
599. Building all possible variants
600. Defining tool chains
601. Reconfigure tool arguments
602. Defining target platforms
603. Registering CUnit tests
604. Configuring CUnit tests
605. Running CUnit tests
606. Registering GoogleTest tests
607. an example of using a custom software model
608. Declare a custom component
609. Register a custom component
610. Declare a custom binary
611. Register a custom binary
612. Declare a custom source set
613. Register a custom source set
614. Generates documentation binaries
615. Generates tasks for text source sets
616. Register a custom source set
617. an example of using a custom software model
618. components report
619. public type and internal view declaration
620. type registration
621. public and internal data mutation
622. example build script and model report output
623. Module dependencies
624. File dependencies
625. Generated file dependencies
626. Project dependencies
627. Gradle API dependencies
628. Gradle TestKit dependencies
629. Gradle's Groovy dependencies
630. Flat repository resolver
631. Adding central Maven repository
632. Adding Bintray's JCenter Maven repository
633. Adding Google Maven repository
634. Adding the local Maven cache as a repository
635. Adding custom Maven repository
636. Adding additional Maven repositories for JAR files
637. Accessing password-protected Maven repository
638. Ivy repository
639. Ivy repository with named layout
640. Ivy repository with pattern layout
641. Ivy repository with multiple custom patterns
642. Ivy repository with Maven compatible layout
643. Ivy repository with authentication
644. Declaring a Maven and Ivy repository
645. Using the SFTP protocol for a repository
646. Declaring a S3 backed Maven and Ivy repository
647. Declaring a S3 backed Maven and Ivy repository using IAM
648. Declaring a Google Cloud Storage backed Maven and Ivy repository using default application credentials
649. Configure repository to use only digest authentication
650. Configure repository to use preemptive authentication
B.1. Variables scope: local and script wide
B.2. Distinct configuration and execution phase

About Gradle

Introduction

We would like to introduce you to Gradle, a build system that we think is a quantum leap for build technology in the Java (JVM) world. Gradle provides:

  • A very flexible general purpose build tool like Ant.

  • Switchable, build-by-convention frameworks a la Maven. But we never lock you in!

  • Very powerful support for multi-project builds.

  • Very powerful dependency management (based on Apache Ivy).

  • Full support for your existing Maven or Ivy repository infrastructure.

  • Support for transitive dependency management without the need for remote repositories or pom.xml and ivy.xml files.

  • Ant tasks and builds as first class citizens.

  • Groovy build scripts.

  • A rich domain model for describing your build.

In Overview you will find a detailed overview of Gradle. Otherwise, the guides are waiting, have fun :)

About this user guide

This user guide, like Gradle itself, is under very active development. Some parts of Gradle aren’t documented as completely as they need to be. Some of the content presented won’t be entirely clear or will assume that you know more about Gradle than you do. We need your help to improve this user guide. You can find out more about contributing to the documentation at the Gradle web site.

Throughout the user guide, you will find some diagrams that represent dependency relationships between Gradle tasks. These use something analogous to the UML dependency notation, which renders an arrow from one task to the task that the first task depends on.

Overview

Features

Here is a list of some of Gradle’s features.

Declarative builds and build-by-convention

At the heart of Gradle lies a rich extensible Domain Specific Language (DSL) based on Groovy. Gradle pushes declarative builds to the next level by providing declarative language elements that you can assemble as you like. Those elements also provide build-by-convention support for Java, Groovy, OSGi, Web and Scala projects. Even more, this declarative language is extensible. Add your own new language elements or enhance the existing ones, thus providing concise, maintainable and comprehensible builds.

Language for dependency based programming

The declarative language lies on top of a general purpose task graph, which you can fully leverage in your builds. It provides utmost flexibility to adapt Gradle to your unique needs.

Structure your build

The suppleness and richness of Gradle finally allows you to apply common design principles to your build. For example, it is very easy to compose your build from reusable pieces of build logic. Inline stuff where unnecessary indirections would be inappropriate. Don’t be forced to tear apart what belongs together (e.g. in your project hierarchy). Avoid smells like shotgun changes or divergent change that turn your build into a maintenance nightmare. At last you can create a well structured, easily maintained, comprehensible build.

Deep API

Gradle is a pleasure to use embedded, and through its many hooks over the whole lifecycle of build execution it allows you to monitor and customize its configuration and execution behavior to its very core.

Gradle scales

Gradle scales very well. It significantly increases your productivity, from simple single-project builds up to huge enterprise multi-project builds. This is true for structuring the build. With its state-of-the-art incremental build function, this is also true for tackling the performance pain many large enterprise builds suffer from.

Multi-project builds

Gradle’s support for multi-project builds is outstanding. Project dependencies are first class citizens. We allow you to model the project relationships in a multi-project build as they really are for your problem domain. Gradle follows your layout, not vice versa.

Gradle provides partial builds. If you build a single subproject Gradle takes care of building all the subprojects that subproject depends on. You can also choose to rebuild the subprojects that depend on a particular subproject. Together with incremental builds this is a big time saver for larger builds.

Many ways to manage your dependencies

Different teams prefer different ways to manage their external dependencies. Gradle provides convenient support for any strategy. From transitive dependency management with remote Maven and Ivy repositories to jars or directories on the local file system.

Gradle is the first build integration tool

Ant tasks are first class citizens. Even more interesting, Ant projects are first class citizens as well. Gradle provides a deep import for any Ant project, turning Ant targets into native Gradle tasks at runtime. You can depend on them from Gradle, you can enhance them from Gradle, you can even declare dependencies on Gradle tasks in your build.xml. The same integration is provided for properties, paths, etc.

Gradle fully supports your existing Maven or Ivy repository infrastructure for publishing and retrieving dependencies. Gradle also provides a converter for turning a Maven pom.xml into a Gradle script. Runtime imports of Maven projects will come soon.

Ease of migration

Gradle can adapt to any structure you have, so you can always develop your Gradle build in the same branch where your production build lives, and both can evolve in parallel. We usually recommend writing tests that make sure that the produced artifacts are similar. That way migration is as non-disruptive and as reliable as possible. This follows the best practices for refactoring: apply baby steps.

Groovy

Gradle’s build scripts are written in Groovy, not XML. But unlike other approaches this is not for simply exposing the raw scripting power of a dynamic language. That would just lead to a build that is very difficult to maintain. The whole design of Gradle is oriented towards being used as a language, not as a rigid framework. And Groovy is our glue that allows you to tell your individual story with the abstractions Gradle (or you) provide. Gradle provides some standard stories but they are not privileged in any form. This is for us a major distinguishing feature compared to other declarative build systems. Our Groovy support is not just sugar coating. The whole Gradle API is fully Groovy-ized. Adding Groovy results in an enjoyable and productive experience.

The Gradle wrapper

The Gradle Wrapper allows you to execute Gradle builds on machines where Gradle is not installed. This is useful for example for some continuous integration servers. It is also useful for an open source project to keep the barrier low for building it. The wrapper is also very interesting for the enterprise. It is a zero administration approach for the client machines. It also enforces the usage of a particular Gradle version thus minimizing support issues.

Free and open source

Gradle is an open source project, and is licensed under the ASL.

Why Groovy?

We think the advantages of an internal DSL (based on a dynamic language) over XML are tremendous when used in build scripts. There are a couple of dynamic languages out there. Why Groovy? The answer lies in the context Gradle is operating in. Although Gradle is a general purpose build tool at its core, its main focus is Java projects. In such projects the team members will be very familiar with Java. We think a build should be as transparent as possible to all team members.

In that case, you might ask why we don’t just use Java as the language for build scripts. We think this is a valid question. It would have the highest transparency for your team and the lowest learning curve, but because of the limitations of Java, such a build language would not be as nice, expressive and powerful as it could be.[1] Languages like Python, Groovy or Ruby do a much better job here. We have chosen Groovy as it offers by far the greatest transparency for Java people. Its base syntax is the same as Java’s, as are its type system, its package structure and other things. Groovy provides much more on top of that, but with the common foundation of Java.

For Java developers with Python or Ruby knowledge or the desire to learn them, the above arguments don’t apply. The Gradle design is well-suited for creating another build script engine in JRuby or Jython. It just doesn’t have the highest priority for us at the moment. We happily support any community effort to create additional build script engines.



[1] At http://www.defmacro.org/ramblings/lisp.html you will find an interesting article comparing Ant, XML, Java and Lisp. It’s funny that the 'if Java had that syntax' syntax in this article is actually the Groovy syntax.

Working with existing builds

Installing Gradle

Prerequisites

Gradle requires a Java JDK or JRE to be installed, version 7 or higher (to check, use java -version). Gradle ships with its own Groovy library, therefore Groovy does not need to be installed. Any existing Groovy installation is ignored by Gradle.

Gradle uses whatever JDK it finds in your path. Alternatively, you can set the JAVA_HOME environment variable to point to the installation directory of the desired JDK.

Download

You can download one of the Gradle distributions from the Gradle web site.

Unpacking

The Gradle distribution comes packaged as a ZIP. The full distribution contains:

  • The Gradle binaries.

  • The user guide (HTML and PDF).

  • The DSL reference guide.

  • The API documentation (Javadoc).

  • Extensive samples, including the examples referenced in the user guide, along with some complete and more complex builds you can use as a starting point for your own build.

  • The binary sources. These are for reference only. If you want to build Gradle you need to download the source distribution or check out the sources from the source repository. See the Gradle web site for details.

Environment variables

To run Gradle, first add the environment variable GRADLE_HOME, pointing to the directory where you unpacked the Gradle distribution. Then add GRADLE_HOME/bin to your PATH environment variable. Usually, this is sufficient to run Gradle.
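
For example, on macOS or Linux you might add something like the following to your shell profile (the install path shown is only an illustration; use whatever directory you unpacked the distribution into):

export GRADLE_HOME=/opt/gradle/gradle-4.5.1
export PATH=$PATH:$GRADLE_HOME/bin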

Running and testing your installation

You run Gradle via the gradle command. To check if Gradle is properly installed just type gradle -v. The output shows the Gradle version and also the local environment configuration (Groovy, JVM version, OS, etc.). The displayed Gradle version should match the distribution you have downloaded.

JVM options

JVM options for running Gradle can be set via environment variables. You can use either GRADLE_OPTS or JAVA_OPTS, or both. JAVA_OPTS is by convention an environment variable shared by many Java applications. A typical use case would be to set the HTTP proxy in JAVA_OPTS and the memory options in GRADLE_OPTS. Those variables can also be set at the beginning of the gradle or gradlew script.

Note that it’s not currently possible to set JVM options for Gradle on the command line.
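
As an illustration of the convention described above, you could set a proxy via JAVA_OPTS and memory settings via GRADLE_OPTS (the host name and heap size here are placeholders):

export JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080"
export GRADLE_OPTS="-Xmx1g"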

Command-Line Interface

The command-line interface is one of the primary methods of interacting with Gradle. The following serves as a reference for executing and customizing Gradle from the command line, as well as when writing scripts or configuring continuous integration.

Use of the Gradle Wrapper is highly encouraged. You should substitute ./gradlew or gradlew.bat for gradle in all following examples when using the Wrapper.
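
For example, instead of gradle build you would run:

❯ ./gradlew build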

Executing Gradle on the command-line conforms to the following structure. Options are allowed before and after task names.

gradle [taskName...] [--option-name...]

If multiple tasks are specified, they should be separated with a space.

Options that accept values can be specified with or without = between the option and argument; however, use of = is recommended.

--console=plain

Options that enable behavior have long-form options with inverses specified with --no-. The following are opposites.

--build-cache
--no-build-cache

Many long-form options have short-option equivalents. The following are equivalent:

--help
-h

Many command-line flags can be specified in gradle.properties to avoid needing to be typed. See the configuring build environment guide for details.
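
For example, a gradle.properties file in the project root directory might enable some of the flags described later in this chapter so they don’t have to be typed on every invocation (a minimal sketch using standard Gradle properties):

org.gradle.caching=true
org.gradle.parallel=true
org.gradle.daemon=true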

The following sections describe use of the Gradle command-line interface, grouped roughly by user goal. Some plugins also add their own command line options, for example --tests for Java test filtering.

Executing tasks

You can run a task and all of its dependencies.

❯ gradle myTask

You can learn about what projects and tasks are available in the project reporting section.

Executing tasks in multi-project builds

In a multi-project build, subproject tasks can be executed with ":" separating subproject name and task name. The following are equivalent when run from the root project.

❯ gradle :mySubproject:taskName
❯ gradle mySubproject:taskName

You can also run a task for all subprojects using the task name only. For example, this will run the "test" task for all subprojects when invoked from the root project directory.

❯ gradle test

When invoking Gradle from within a subproject, the project name should be omitted:

❯ cd mySubproject
❯ gradle taskName

When executing the Gradle Wrapper from subprojects, one must reference gradlew relatively. For example: ../gradlew taskName. The community gdub project aims to make this more convenient.

Executing multiple tasks

You can also specify multiple tasks. For example, the following will execute the test and deploy tasks in the order that they are listed on the command-line and will also execute the dependencies for each task.

❯ gradle test deploy

Excluding tasks from execution

You can exclude a task from being executed using the -x or --exclude-task command-line option and providing the name of the task to exclude.

Figure 1. Example Task Graph

Example 1. Excluding tasks

Output of gradle dist --exclude-task test

> gradle dist --exclude-task test
:compile
compiling source
:dist
building the distribution

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

You can see that the test task is not executed, even though it is a dependency of the dist task. The test task’s dependencies such as compileTest are not executed either. Those dependencies of test that are required by another task, such as compile, are still executed.

Forcing tasks to execute

You can force Gradle to execute all tasks ignoring up-to-date checks using the --rerun-tasks option:

❯ gradle test --rerun-tasks

This will force test and all task dependencies of test to execute. It’s a little like running gradle clean test, but without the build’s generated output being deleted.

Continuing the build when a failure occurs

By default, Gradle will abort execution and fail the build as soon as any task fails. This allows the build to complete sooner, but hides other failures that would have occurred. In order to discover as many failures as possible in a single build execution, you can use the --continue option.

❯ gradle test --continue

When executed with --continue, instead of stopping as soon as the first failure is encountered, Gradle will execute every task whose dependencies all completed without failure. Each of the encountered failures will be reported at the end of the build.

If a task fails, any subsequent tasks that depend on it will not be executed. For example, tests will not run if there is a compilation failure in the code under test, because the test task depends on the compilation task (either directly or indirectly).

Task name abbreviation

When you specify tasks on the command-line, you don’t have to provide the full name of the task. You only need to provide enough of the task name to uniquely identify the task. For example, it’s likely gradle che is enough for Gradle to identify the check task.

You can also abbreviate each word in a camel case task name. For example, you can execute task compileTest by running gradle compTest or even gradle cT.

Example 2. Abbreviated camel case task name

Output of gradle cT

> gradle cT
:compile
compiling source
:compileTest
compiling unit tests

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

You can also use these abbreviations with the -x command-line option.
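
For example, building on the sample build above, the compileTest task could be excluded via its camel case abbreviation (assuming cT is unambiguous in your build):

❯ gradle dist -x cT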

Common tasks

The following are task conventions applied by built-in and most major Gradle plugins.

Computing all outputs

It is common in Gradle builds for the build task to designate assembling all outputs and running all checks.

❯ gradle build

Running applications

It is common for applications to be run with the run task, which assembles the application and executes some script or binary.

❯ gradle run

Running all checks

It is common for all verification tasks, including tests and linting, to be executed using the check task.

❯ gradle check

Cleaning outputs

You can delete the contents of the build directory using the clean task, though doing so will discard pre-computed outputs, adding significant build time to the subsequent execution.

❯ gradle clean

Project reporting

Gradle provides several built-in tasks which show particular details of your build. This can be useful for understanding the structure and dependencies of your build, and for debugging problems.

You can get basic help about available reporting options using gradle help.

Listing projects

Running gradle projects gives you a list of the sub-projects of the selected project, displayed in a hierarchy.

❯ gradle projects

You also get a project report within build scans. Learn more about creating build scans.

Listing tasks

Running gradle tasks gives you a list of the main tasks of the selected project. This report shows the default tasks for the project, if any, and a description for each task.

❯ gradle tasks

By default, this report shows only those tasks which have been assigned to a task group. You can obtain more information in the task listing using the --all option.

❯ gradle tasks --all

Show task usage details

Running gradle help --task someTask gives you detailed information about a specific task.

Example 3. Obtaining detailed help for tasks

Output of gradle -q help --task libs

> gradle -q help --task libs
Detailed task information for libs

Paths
     :api:libs
     :webapp:libs

Type
     Task (org.gradle.api.Task)

Description
     Builds the JAR

Group
     build

This information includes the full task path, the task type, possible command line options and the description of the given task.

Reporting dependencies

Build scans give a full, visual report of what project and binary dependencies exist on which configurations, transitive dependencies, and dependency version selection.

❯ gradle myTask --scan

This will give you a link to a web-based report, where you can find dependency information like this.

Build Scan dependencies report

Learn more in Inspecting Dependencies.

Listing project dependencies

Running gradle dependencies gives you a list of the dependencies of the selected project, broken down by configuration. For each configuration, the direct and transitive dependencies of that configuration are shown in a tree. Below is an example of this report:

❯ gradle dependencies

Concrete examples of build scripts and output are available in Inspecting Dependencies.

Running gradle buildEnvironment visualizes the buildscript dependencies of the selected project, similarly to how gradle dependencies visualizes the dependencies of the software being built.

❯ gradle buildEnvironment

Running gradle dependencyInsight gives you an insight into a particular dependency (or dependencies) that match the specified input.

❯ gradle dependencyInsight

Since a dependency report can get large, it can be useful to restrict the report to a particular configuration. This is achieved with the optional --configuration parameter:
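
❯ gradle dependencies --configuration compile

Here compile is only an example; substitute any configuration name defined in your build.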

Listing project properties

Running gradle properties gives you a list of the properties of the selected project.

Example 4. Information about properties

Output of gradle -q api:properties

> gradle -q api:properties

------------------------------------------------------------
Project :api - The shared API for the application
------------------------------------------------------------

allprojects: [project ':api']
ant: org.gradle.api.internal.project.DefaultAntBuilder@12345
antBuilderFactory: org.gradle.api.internal.project.DefaultAntBuilderFactory@12345
artifacts: org.gradle.api.internal.artifacts.dsl.DefaultArtifactHandler_Decorated@12345
asDynamicObject: DynamicObject for project ':api'
baseClassLoaderScope: org.gradle.api.internal.initialization.DefaultClassLoaderScope@12345
buildDir: /home/user/gradle/samples/userguide/tutorial/projectReports/api/build
buildFile: /home/user/gradle/samples/userguide/tutorial/projectReports/api/build.gradle

Software Model reports

You can get a hierarchical view of elements for software model projects using the model task:

❯ gradle model

Learn more about the model report in the software model documentation.

Command-line completion

Gradle provides bash and zsh tab completion support for tasks, options, and Gradle properties through gradle-completion, installed separately.

Figure 2. Gradle Completion

Debugging options

-?, -h, --help

Shows a help message with all available CLI options.

-v, --version

Prints Gradle, Groovy, Ant, JVM, and operating system version information.

-S, --full-stacktrace

Print out the full (very verbose) stacktrace for any exceptions. See also logging options.

-s, --stacktrace

Print out the stacktrace for user exceptions as well (e.g. compile errors). See also logging options.

--scan

Create a build scan with fine-grained information about all aspects of your Gradle build.

-Dorg.gradle.debug=true

Debug Gradle client (non-Daemon) process. Gradle will wait for you to attach a debugger at localhost:5005 by default.

-Dorg.gradle.daemon.debug=true

Debug Gradle Daemon process.

Performance options

Try these options when optimizing build performance. Learn more about improving performance of Gradle builds here.

Many of these options can be specified in gradle.properties so command-line flags are not necessary. See the configuring build environment guide.

--build-cache, --no-build-cache

Toggles the Gradle build cache. Gradle will try to reuse outputs from previous builds. Default is off.

--configure-on-demand, --no-configure-on-demand

Toggles Configure-on-demand. Only relevant projects are configured in this build run. Default is off.

--max-workers

Sets the maximum number of workers that Gradle may use. Default is the number of processors.

--parallel, --no-parallel

Build projects in parallel. For limitations of this option please see the section called “Parallel project execution”. Default is off.

--profile

Generates a high-level performance report in the $buildDir/reports/profile directory. --scan is preferred.

--scan

Generate a build scan with detailed performance diagnostics.

Build Scan performance report

Gradle daemon options

You can manage the Gradle Daemon through the following command line options.

--daemon, --no-daemon

Use the Gradle Daemon to run the build. Starts the daemon if it is not already running or if the existing daemon is busy. Default is on.

--foreground

Starts the Gradle Daemon in a foreground process.

--status (Standalone command)

Run gradle --status to list running and recently stopped Gradle daemons. Only displays daemons of the same Gradle version.

--stop (Standalone command)

Run gradle --stop to stop all Gradle Daemons of the same version.

-Dorg.gradle.daemon.idletimeout=(number of milliseconds)

Gradle Daemon will stop itself after this number of milliseconds of idle time. Default is 10800000 (3 hours).
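
For example, to stop idle daemons after one hour you could add the following to gradle.properties (the value is illustrative):

org.gradle.daemon.idletimeout=3600000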

Logging options

Setting log level

You can customize the verbosity of Gradle logging with the following options, ordered from least verbose to most verbose. Learn more in the logging documentation.

-Dorg.gradle.logging.level=(quiet,warn,lifecycle,info,debug)

Set logging level via Gradle properties.

-q, --quiet

Log errors only.

-w, --warn

Set log level to warn.

-i, --info

Set log level to info.

-d, --debug

Log in debug mode (includes normal stacktrace).

Lifecycle is the default log level.

Customizing log format

You can control the use of rich output (colors and font variants) by specifying the "console" mode in the following ways:

-Dorg.gradle.console=(auto,plain,rich,verbose)

Specify console mode via Gradle properties. Different modes described immediately below.

--console=(auto,plain,rich,verbose)

Specifies which type of console output to generate.

Set to plain to generate plain text only. This option disables all color and other rich output in the console output. This is the default when Gradle is not attached to a terminal.

Set to auto (the default) to enable color and other rich output in the console output when the build process is attached to a console, or to generate plain text only when not attached to a console. This is the default when Gradle is attached to a terminal.

Set to rich to enable color and other rich output in the console output, regardless of whether the build process is attached to a console. When not attached to a console, the build output will use ANSI control characters to generate the rich output.

Set to verbose to enable color and other rich output like rich, but also output task names and outcomes at the lifecycle log level, as was done by default in Gradle 3.5 and earlier.

Showing or hiding warnings

By default, Gradle won’t display all warnings (e.g. deprecation warnings). Instead, Gradle will collect them and render a summary at the end of the build like:

Deprecated Gradle API and/or features were used in this build, making it incompatible with Gradle 5.0.

You can control the verbosity of warnings on the console with the following options:

-Dorg.gradle.warning.mode=(all,none,summary)

Specify warning mode via Gradle properties. Different modes described immediately below.

--warning-mode=(all,none,summary)

Specifies how to log warnings. Default is summary.

Set to all to log all warnings.

Set to summary to suppress all warnings and log a summary at the end of the build.

Set to none to suppress all warnings, including the summary at the end of the build.
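
For example, to see each warning as it occurs rather than only the end-of-build summary:

❯ gradle build --warning-mode=all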

Rich Console

Gradle’s rich console displays extra information while builds are running.

Gradle Rich Console

Features:

  • Logs above grouped by task that generated them

  • Progress bar and timer visually describe overall status

  • Parallel work-in-progress lines below describe what is happening now

Execution options

The following options affect how builds are executed, by changing what is built or how dependencies are resolved.

--include-build

Run the build as a composite, including the specified build. See Composite Builds.

--offline

Specifies that the build should operate without accessing network resources. Learn more about options to override dependency caching.

--refresh-dependencies

Refresh the state of dependencies. Learn more about how to use this in the dependency management docs.

--dry-run

Run Gradle with all task actions disabled. Use this to show which tasks would have executed.

Bootstrapping new projects

Creating new Gradle builds

Use the built-in gradle init task to create a new Gradle build, with new or existing projects.

❯ gradle init

Most of the time you’ll want to specify a project type. Available types include basic (default), java-library, java-application, and more. See init plugin documentation for details.

❯ gradle init --type java-library

Standardize and provision Gradle

The built-in gradle wrapper task generates a script, gradlew, that invokes a declared version of Gradle, downloading it beforehand if necessary.

❯ gradle wrapper --gradle-version=4.4

You can also specify --distribution-type=(bin|all), --gradle-distribution-url, --gradle-distribution-sha256-sum in addition to --gradle-version. Full details on how to use these options are documented in the Gradle wrapper section.
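
For example, to pin the Wrapper to Gradle 4.4 using the complete distribution, which also includes sources and documentation (the version and type shown are illustrative):

❯ gradle wrapper --gradle-version=4.4 --distribution-type=all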

Environment options

You can customize many aspects of the build environment, such as where build scripts, settings, and caches are located, through the options below. Learn more about customizing your build environment.

-b, --build-file

Specifies the build file. For example: gradle --build-file=foo.gradle. The default is build.gradle, then build.gradle.kts, then myProjectName.gradle.

-c, --settings-file

Specifies the settings file. For example: gradle --settings-file=somewhere/else/settings.gradle

-g, --gradle-user-home

Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.

-p, --project-dir

Specifies the start directory for Gradle. Defaults to current directory.

--project-cache-dir

Specifies the project-specific cache directory. Default value is .gradle in the root project directory.

-u, --no-search-upward (deprecated)

Don’t search in parent directories for a settings.gradle file.

-D, --system-prop

Sets a system property of the JVM, for example -Dmyprop=myvalue. See the section called “System properties”.

-I, --init-script

Specifies an initialization script. See Initialization Scripts.

-P, --project-prop

Sets a project property of the root project, for example -Pmyprop=myvalue. See the section called “Project properties”. A short sketch of reading such a property from a build script follows this option list.

-Dorg.gradle.jvmargs

Set JVM arguments.

-Dorg.gradle.java.home

Set JDK home dir.
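
As promised above, here is a minimal sketch of reading a project property from a build script; the property name myprop and the task name printMyProp are purely illustrative:

build.gradle

task printMyProp {
    doLast {
        // findProperty returns null instead of failing when the property was not supplied
        println "myprop = ${findProperty('myprop') ?: 'not set'}"
    }
}

Running gradle printMyProp -Pmyprop=myvalue prints the supplied value, while omitting the -P option falls back to the default.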

The Gradle Wrapper

The recommended way to execute any Gradle build is with the help of the Gradle Wrapper (in short just “Wrapper”). The Wrapper is a script that invokes a declared version of Gradle, downloading it beforehand if necessary. As a result, developers can get up and running with a Gradle project quickly without having to follow manual installation processes, saving your company time and money.

Figure 3. The Wrapper workflow

In a nutshell you gain the following benefits:

  • Standardizes a project on a given Gradle version, leading to more reliable and robust builds.

  • Provisioning a new Gradle version to different users and execution environments (e.g. IDEs or Continuous Integration servers) is as simple as changing the Wrapper definition.

So how does it work? For a user there are typically three different workflows:

  • You set up a new Gradle project and want to add the Wrapper to it.

  • You want to run a project with the Wrapper that already provides it.

  • You want to upgrade the Wrapper to a new version of Gradle.

The following sections explain each of these use cases in more detail.

Adding the Gradle Wrapper

Generating the Wrapper files requires an installed version of the Gradle runtime on your machine as described in Installing Gradle. Thankfully, generating the initial Wrapper files is a one-time process.

Every vanilla Gradle build comes with a built-in task called wrapper. You’ll be able to find the task listed under the group "Build Setup tasks" when listing the tasks. Executing the wrapper task generates the necessary Wrapper files in the project directory.

Example 5. Running the Wrapper task

Output of gradle wrapper

> gradle wrapper
:wrapper

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

To make the Wrapper files available to other developers and execution environments you’ll need to check them into version control. All Wrapper files including the JAR file are very small in size. Adding the JAR file to version control is expected. Some organizations do not allow projects to commit binary files to version control. At the moment there are no alternative options to this approach.

The generated Wrapper properties file, gradle/wrapper/gradle-wrapper.properties, stores the information about the Gradle distribution.

  • The server hosting the Gradle distribution.

  • The type of Gradle distribution. By default that’s the -bin distribution containing only the runtime but no sample code and documentation.

  • The Gradle version used for executing the build. By default the wrapper task picks the exact same Gradle version that was used to generate the Wrapper files.

Example 6. The generated distribution URL

gradle/wrapper/gradle-wrapper.properties

distributionUrl=https\://services.gradle.org/distributions/gradle-4.3.1-bin.zip


All of those aspects are configurable at the time of generating the Wrapper files with the help of the following command line options.

--gradle-version

The Gradle version the Wrapper should download and execute.

--distribution-type

The Gradle distribution type used for the Wrapper. Available options are bin and all. The default value is bin.

--gradle-distribution-url

The full URL pointing to the Gradle distribution ZIP file. Using this option makes --gradle-version and --distribution-type obsolete, as the URL already contains this information. This option is extremely valuable if you want to host the Gradle distribution inside your company’s network.

--gradle-distribution-sha256-sum

The SHA256 hash sum used for verifying the downloaded Gradle distribution.
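
For instance, a sketch that generates Wrapper files pointing at a company-hosted distribution and pins its checksum; the host name is a placeholder and <sha-256 of the ZIP> stands in for the real hash:

❯ gradle wrapper --gradle-distribution-url=https://intranet.example.com/dists/gradle-4.4-bin.zip --gradle-distribution-sha256-sum=<sha-256 of the ZIP>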

Let’s assume the following use case to illustrate the use of the command line options. You would like to generate the Wrapper with version 4.0 and use the -all distribution so that your IDE can provide code completion and let you navigate to the Gradle source code. Those requirements are captured by the following command line execution:

Example 7. Providing options to Wrapper task

Output of gradle wrapper --gradle-version 4.0 --distribution-type all

> gradle wrapper --gradle-version 4.0 --distribution-type all
:wrapper

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

As a result you can find the desired information in the Wrapper properties file.

Example 8. The generated distribution URL

gradle/wrapper/gradle-wrapper.properties

distributionUrl=https\://services.gradle.org/distributions/gradle-4.0-all.zip


Let’s have a look at the following project layout to illustrate the expected Wrapper files:

.
├── build.gradle
├── settings.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradlew
└── gradlew.bat

A Gradle project typically provides a build.gradle and a settings.gradle file. The Wrapper files live alongside in the gradle directory and the root directory of the project. The following list explains their purpose.

gradle-wrapper.jar

The Wrapper JAR file containing code for downloading the Gradle distribution.

gradle-wrapper.properties

A properties file responsible for configuring the Wrapper runtime behavior, e.g. the Gradle version compatible with this Wrapper version.

gradlew, gradlew.bat

A shell script and a Windows batch script for executing the build with the Wrapper.

You can go ahead and execute the build with the Wrapper without having to install the Gradle runtime. If the project you are working on does not contain those Wrapper files then you’ll need to generate them.

Using the Gradle Wrapper

It is recommended to always execute a build with the Wrapper to ensure a reliable, controlled and standardized execution of the build. Using the Wrapper looks almost exactly like running the build with a Gradle installation. Depending on the operating system you either run gradlew or gradlew.bat instead of the gradle command. The following console output demonstrates the use of the Wrapper on a Windows machine for a Java-based project.

Example 9. Executing the build with the Wrapper batch file

Output of gradlew.bat build

> gradlew.bat build
Downloading https://services.gradle.org/distributions/gradle-4.0-all.zip
.....................................................................................
Unzipping C:\Documents and Settings\Claudia\.gradle\wrapper\dists\gradle-4.0-all\ac27o8rbd0ic8ih41or9l32mv\gradle-4.0-all.zip to C:\Documents and Settings\Claudia\.gradle\wrapper\dists\gradle-4.0-all\ac27o8rbd0ic8ih41or9l32mv
Set executable permissions for: C:\Documents and Settings\Claudia\.gradle\wrapper\dists\gradle-4.0-all\ac27o8rbd0ic8ih41or9l32mv\gradle-4.0\bin\gradle

BUILD SUCCESSFUL in 12s
1 actionable task: 1 executed

In case the Gradle distribution is not available on the machine, the Wrapper will download it and store it in the local file system. Any subsequent build invocation is going to reuse the existing local distribution as long as the distribution URL in the Gradle properties doesn’t change.

The Wrapper shell script and batch file reside in the root directory of a single or multi-project Gradle build. You will need to reference the correct path to those files in case you want to execute the build from a subproject directory e.g. ../../gradlew tasks.

Upgrading the Gradle Wrapper

Projects will typically want to keep up with the times and upgrade their Gradle version to benefit from new features and improvements. One way to upgrade the Gradle version is to manually change the distributionUrl property in the Wrapper property file. The better and recommended option is to run the wrapper task and provide the target Gradle version as described in the section called “Adding the Gradle Wrapper”. Using the wrapper task ensures that any optimizations made to the Wrapper shell script or batch file with that specific Gradle version are applied to the project. As usual you’d commit the changes to the Wrapper files to version control.

Use the Gradle wrapper task to generate the wrapper, specifying a version. The default is the current version, which you can check by executing ./gradlew --version.

Example 10. Upgrading the Wrapper version

Output of ./gradlew wrapper --gradle-version 4.2.1

> ./gradlew wrapper --gradle-version 4.2.1

BUILD SUCCESSFUL in 4s
1 actionable task: 1 executed

Example 11. Checking the Wrapper version after upgrading

Output of ./gradlew -v

> ./gradlew -v
Downloading https://services.gradle.org/distributions/gradle-4.2.1-bin.zip
...................................................................
Unzipping /Users/claudia/.gradle/wrapper/dists/gradle-4.2.1-bin/dajvke9o8kmaxbu0kc5gcgeju/gradle-4.2.1-bin.zip to /Users/claudia/.gradle/wrapper/dists/gradle-4.2.1-bin/dajvke9o8kmaxbu0kc5gcgeju
Set executable permissions for: /Users/claudia/.gradle/wrapper/dists/gradle-4.2.1-bin/dajvke9o8kmaxbu0kc5gcgeju/gradle-4.2.1/bin/gradle

------------------------------------------------------------
Gradle 4.2.1
------------------------------------------------------------

Build time:   2017-10-02 15:36:21 UTC
Revision:     a88ebd6be7840c2e59ae4782eb0f27fbe3405ddf

Groovy:       2.4.12
Ant:          Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM:          1.8.0_60 (Oracle Corporation 25.60-b23)
OS:           Mac OS X 10.13.1 x86_64

Customizing the Gradle Wrapper

Most users of Gradle are happy with the default runtime behavior of the Wrapper. However, organizational policies, security constraints or personal preferences might require you to dive deeper into customizing the Wrapper. Thankfully, the built-in wrapper task exposes numerous options to bend the runtime behavior to your needs. Most configuration options are exposed by the underlying task type Wrapper.

Let’s assume you grew tired of defining the -all distribution type on the command line every time you upgrade the Wrapper. You can save yourself some keyboard strokes by re-configuring the wrapper task.

Example 12. Customizing the Wrapper task

build.gradle

wrapper {
    distributionType = Wrapper.DistributionType.ALL
}

With the configuration in place running ./gradlew wrapper --gradle-version 4.1 is enough to produce a distributionUrl value in the Wrapper properties file that will request the -all distribution.

Example 13. The generated distribution URL

gradle/wrapper/gradle-wrapper.properties

distributionUrl=https\://services.gradle.org/distributions/gradle-4.1-all.zip


Check out the API documentation for more detailed descriptions of the available configuration options. You can also find various samples for configuring the Wrapper in the Gradle distribution.
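
In the same spirit as the distribution type above, the wrapper task can also pin a company-hosted distribution URL so that regenerating the Wrapper files keeps pointing at it; the host name below is a placeholder:

build.gradle

wrapper {
    distributionUrl = 'https://intranet.example.com/dists/gradle-4.1-all.zip'
}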

Authenticated Gradle distribution download

The Gradle Wrapper can download Gradle distributions from servers using HTTP Basic Authentication. This enables you to host the Gradle distribution on a private protected server. You can specify a username and password in two different ways depending on your use case: as system properties or directly embedded in the distributionUrl. Credentials in system properties take precedence over the ones embedded in distributionUrl.

Security Warning

HTTP Basic Authentication should only be used with HTTPS URLs and not plain HTTP ones. With Basic Authentication, the user credentials are sent in clear text.

System properties can be specified in the .gradle/gradle.properties file in the user’s home directory, or by other means; see the section called “Gradle properties”.

Example 14. Specifying the HTTP Basic Authentication credentials using system properties

gradle.properties

systemProp.gradle.wrapperUser=username
systemProp.gradle.wrapperPassword=password


Embedding credentials in the distributionUrl in the gradle/wrapper/gradle-wrapper.properties file also works. Please note that this file is to be committed into your source control system. Shared credentials embedded in distributionUrl should only be used in a controlled environment.

Example 15. Specifying the HTTP Basic Authentication credentials in distributionUrl

gradle/wrapper/gradle-wrapper.properties

distributionUrl=https://username:password@somehost/path/to/gradle-distribution.zip


This can be used in conjunction with a proxy, authenticated or not. See the section called “Accessing the web through a HTTP proxy” for more information on how to configure the Wrapper to use a proxy.

Verification of downloaded Gradle distributions

The Gradle Wrapper allows for verification of the downloaded Gradle distribution via SHA-256 hash sum comparison. This increases security against targeted attacks by preventing a man-in-the-middle attacker from tampering with the downloaded Gradle distribution.

To enable this feature, download the .sha256 file associated with the Gradle distribution you want to verify.

Downloading the SHA-256 file

You can download the .sha256 file from the stable releases or release candidate and nightly releases. The format of the file is a single line of text that is the SHA-256 hash of the corresponding zip file.
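
As a sketch, the checksum file lives at the distribution URL with a .sha256 suffix appended, so it can be fetched with any HTTP client:

❯ curl -sL https://services.gradle.org/distributions/gradle-4.2.1-bin.zip.sha256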

Configuring checksum verification

Add the downloaded hash sum to gradle-wrapper.properties using the distributionSha256Sum property or use --gradle-distribution-sha256-sum on the command-line.

Example 16. Configuring SHA-256 checksum verification

gradle/wrapper/gradle-wrapper.properties

distributionSha256Sum=371cb9fbebbe9880d147f59bab36d61eee122854ef8c9ee1ecf12b82368bcf10


Gradle will report a build failure in case the configured checksum does not match the checksum found on the server hosting the distribution. Checksum verification is only performed if the configured Wrapper distribution hasn’t been downloaded yet.

The Gradle Daemon

From Wikipedia…

A daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.

Gradle runs on the Java Virtual Machine (JVM) and uses several supporting libraries that require a non-trivial initialization time. As a result, it can sometimes seem a little slow to start. The solution to this problem is the Gradle Daemon: a long-lived background process that executes your builds much more quickly than would otherwise be the case. We accomplish this by avoiding the expensive bootstrapping process as well as leveraging caching, by keeping data about your project in memory. Running Gradle builds with the Daemon is no different than without. Simply configure whether you want to use it or not - everything else is handled transparently by Gradle.

Why the Gradle Daemon is important for performance

The Daemon is a long-lived process, so not only are we able to avoid the cost of JVM startup for every build, but we are able to cache information about project structure, files, tasks, and more in memory.

The reasoning is simple: improve build speed by reusing computations from previous builds. However, the benefits are dramatic: we typically measure build times reduced by 15-75% on subsequent builds. We recommend profiling your build by using --profile to get a sense of how much impact the Gradle Daemon can have for you.

The Gradle Daemon is enabled by default starting with Gradle 3.0, so you don’t have to do anything to benefit from it.

If you run CI builds in ephemeral environments (such as containers) that do not reuse any processes, use of the Daemon will slightly decrease performance (due to caching additional information) for no benefit, and may be disabled.

Running Daemon Status

To get a list of running Gradle Daemons and their statuses use the --status command.

Sample output:

  PID VERSION                 STATUS
28411 3.0                     IDLE
34247 3.0                     BUSY

Currently, a given Gradle version can only connect to daemons of the same version. This means the status output will only show Daemons for the version of Gradle being invoked and not for any other versions. Future versions of Gradle will lift this constraint and will show the running Daemons for all versions of Gradle.

Disabling the Daemon

The Gradle Daemon is enabled by default, and we recommend always enabling it. There are several ways to disable the Daemon, but the most common one is to add the line

org.gradle.daemon=false

to the file «USER_HOME»/.gradle/gradle.properties, where «USER_HOME» is your home directory. That’s typically one of the following, depending on your platform:

  • C:\Users\<username> (Windows Vista & 7+)

  • /Users/<username> (macOS)

  • /home/<username> (Linux)

If that file doesn’t exist, just create it using a text editor. You can find details of other ways to disable (and enable) the Daemon in the section called “FAQ” further down. That section also contains more detailed information on how the Daemon works.

Note that once the Daemon is enabled, all your builds will take advantage of the speed boost, regardless of the version of Gradle a particular build uses.

Continuous integration

Since Gradle 3.0, we enable the Daemon by default and recommend using it for both developers' machines and Continuous Integration servers. However, if you suspect that the Daemon makes your CI builds unstable, you can disable it to use a fresh runtime for each build since the runtime is completely isolated from any previous builds.

Stopping an existing Daemon

As mentioned, the Daemon is a background process. You needn’t worry about a build-up of Gradle processes on your machine, though. Every Daemon monitors its memory usage compared to total system memory and will stop itself if idle when available system memory is low. If you want to explicitly stop running Daemon processes for any reason, just use the command gradle --stop.

This will terminate all Daemon processes that were started with the same version of Gradle used to execute the command. If you have the Java Development Kit (JDK) installed, you can easily verify that a Daemon has stopped by running the jps command. You’ll see any running Daemons listed with the name GradleDaemon.

FAQ

How do I disable the Gradle Daemon?

There are two recommended ways to disable the Daemon persistently for an environment:

  • Via environment variables: add the flag -Dorg.gradle.daemon=false to the GRADLE_OPTS environment variable

  • Via properties file: add org.gradle.daemon=false to the «GRADLE_USER_HOME»/gradle.properties file

Note, «GRADLE_USER_HOME» defaults to «USER_HOME»/.gradle, where «USER_HOME» is the home directory of the current user. This location can be configured via the -g and --gradle-user-home command line switches, as well as by the GRADLE_USER_HOME environment variable and org.gradle.user.home JVM system property.

Both approaches have the same effect. Which one to use is up to personal preference. Most Gradle users choose the second option and add the entry to the user gradle.properties file.

On Windows, this command will disable the Daemon for the current user:

(if not exist "%USERPROFILE%/.gradle" mkdir "%USERPROFILE%/.gradle") && (echo. >> "%USERPROFILE%/.gradle/gradle.properties" && echo org.gradle.daemon=false >> "%USERPROFILE%/.gradle/gradle.properties")

On UNIX-like operating systems, the following Bash shell command will disable the Daemon for the current user:

mkdir -p ~/.gradle && echo "org.gradle.daemon=false" >> ~/.gradle/gradle.properties

Once the Daemon is disabled for a build environment in this way, a Gradle Daemon will not be started unless explicitly requested using the --daemon option.

The --daemon and --no-daemon command line options enable and disable usage of the Daemon for individual build invocations when using the Gradle command line interface. These command line options have the highest precedence when considering the build environment. Typically, it is more convenient to enable the Daemon for an environment (e.g. a user account) so that all builds use the Daemon without you having to remember to supply the --daemon option.

Why is there more than one Daemon process on my machine?

There are several reasons why Gradle will create a new Daemon, instead of using one that is already running. The basic rule is that Gradle will start a new Daemon if there are no existing idle or compatible Daemons available. Gradle will kill any Daemon that has been idle for 3 hours or more, so you don’t have to worry about cleaning them up manually.

idle

An idle Daemon is one that is not currently executing a build or doing other useful work.

compatible

A compatible Daemon is one that can (or can be made to) meet the requirements of the requested build environment. The Java runtime used to execute the build is an example aspect of the build environment. Another example is the set of JVM system properties required by the build runtime.

Some aspects of the requested build environment may not be met by a Daemon. If the Daemon is running with a Java 7 runtime, but the requested environment calls for Java 8, then the Daemon is not compatible and another must be started. Moreover, certain properties of a Java runtime cannot be changed once the JVM has started. For example, it is not possible to change the memory allocation (e.g. -Xmx1024m), default text encoding, default locale, etc. of a running JVM.

The “requested build environment” is typically constructed implicitly from aspects of the build client’s (e.g. Gradle command line client, IDE etc.) environment and explicitly via command line switches and settings. See Build Environment for details on how to specify and control the build environment.

The following JVM system properties are effectively immutable. If the requested build environment requires any of these properties with a different value than a Daemon’s JVM has for that property, the Daemon is not compatible (a short sketch follows this list).

  • file.encoding

  • user.language

  • user.country

  • user.variant

  • java.io.tmpdir

  • javax.net.ssl.keyStore

  • javax.net.ssl.keyStorePassword

  • javax.net.ssl.keyStoreType

  • javax.net.ssl.trustStore

  • javax.net.ssl.trustStorePassword

  • javax.net.ssl.trustStoreType

  • com.sun.management.jmxremote
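
As referenced above, a sketch of how a mismatch in one of these properties forces a new Daemon: if the running Daemon was started with the platform default file.encoding, invoking a build with a different value starts another Daemon (the encoding below is only illustrative):

❯ gradle build -Dfile.encoding=ISO-8859-1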

The following JVM attributes, controlled by startup arguments, are also effectively immutable. The corresponding attributes of the requested build environment and the Daemon’s environment must match exactly in order for a Daemon to be compatible.

  • The maximum heap size (i.e. the -Xmx JVM argument)

  • The minimum heap size (i.e. the -Xms JVM argument)

  • The boot classpath (i.e. the -Xbootclasspath argument)

  • The “assertion” status (i.e. the -ea argument)

The required Gradle version is another aspect of the requested build environment. Daemon processes are coupled to a specific Gradle runtime. Working on multiple Gradle projects during a session that use different Gradle versions is a common reason for having more than one running Daemon process.

How much memory does the Daemon use and can I give it more?

If the requested build environment does not specify a maximum heap size, the Daemon will use up to 1GB of heap. It will use the JVM’s default minimum heap size. 1GB is more than enough for most builds. Larger builds with hundreds of subprojects, lots of configuration, and source code may require, or perform better with, more memory.

To increase the amount of memory the Daemon can use, specify the appropriate flags as part of the requested build environment. Please see Build Environment for details.
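
For example, a minimal sketch that raises the Daemon’s maximum heap to 2 gigabytes via a Gradle property; the value is illustrative:

«GRADLE_USER_HOME»/gradle.properties

org.gradle.jvmargs=-Xmx2g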

How can I stop a Daemon?

Daemon processes will automatically terminate themselves after 3 hours of inactivity or less. If you wish to stop a Daemon process before this, you can either kill the process via your operating system or run the gradle --stop command. The --stop switch causes Gradle to request that all running Daemon processes, of the same Gradle version used to run the command, terminate themselves.

What can go wrong with Daemon?

Considerable engineering effort has gone into making the Daemon robust, transparent and unobtrusive during day to day development. However, Daemon processes can occasionally be corrupted or exhausted. A Gradle build executes arbitrary code from multiple sources. While Gradle itself is designed for and heavily tested with the Daemon, user build scripts and third party plugins can destabilize the Daemon process through defects such as memory leaks or global state corruption.

It is also possible to destabilize the Daemon (and build environment in general) by running builds that do not release resources correctly. This is a particularly poignant problem when using Microsoft Windows as it is less forgiving of programs that fail to close files after reading or writing.

Gradle actively monitors heap usage and attempts to detect when a leak is starting to exhaust the available heap space in the daemon. When it detects a problem, the Gradle daemon will finish the currently running build and proactively restart the daemon on the next build. This monitoring is enabled by default, but can be disabled by setting the org.gradle.daemon.performance.enable-monitoring system property to false.
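
For instance, a sketch of turning the monitoring off for a single invocation using the system property named above:

❯ gradle build -Dorg.gradle.daemon.performance.enable-monitoring=false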

If it is suspected that the Daemon process has become unstable, it can simply be killed. Recall that the --no-daemon switch can be specified for a build to prevent use of the Daemon. This can be useful to diagnose whether or not the Daemon is actually the culprit of a problem.

Tools & IDEs

The Gradle Tooling API (see Embedding Gradle using the Tooling API), which is used by IDEs and other tools to integrate with Gradle, always uses the Gradle Daemon to execute builds. If you are executing Gradle builds from within your IDE you are using the Gradle Daemon and do not need to enable it for your environment.

How does the Gradle Daemon make builds faster?

The Gradle Daemon is a long-lived build process. In between builds it waits idly for the next build. This has the obvious benefit of only requiring Gradle to be loaded into memory once for multiple builds, as opposed to once for each build. This in itself is a significant performance optimization, but that’s not where it stops.

A significant part of the story for modern JVM performance is runtime code optimization. For example, HotSpot (the JVM implementation provided by Oracle and used as the basis of OpenJDK) applies optimization to code while it is running. The optimization is progressive and not instantaneous. That is, the code is progressively optimized during execution which means that subsequent builds can be faster purely due to this optimization process. Experiments with HotSpot have shown that it takes somewhere between 5 and 10 builds for optimization to stabilize. The difference in perceived build time between the first build and the 10th for a Daemon can be quite dramatic.

The Daemon also allows more effective in-memory caching across builds. For example, the classes needed by the build (e.g. plugins, build scripts) can be held in memory between builds. Similarly, Gradle can maintain in-memory caches of build data such as the hashes of task inputs and outputs, used for incremental building.

Dependency Management for Java Projects

This chapter introduces some of the basics of dependency management in Gradle.

What is dependency management?

Very roughly, dependency management is made up of two pieces. Firstly, Gradle needs to know about the things that your project needs to build or run, in order to find them. We call these incoming files the dependencies of the project. Secondly, Gradle needs to build and upload the things that your project produces. We call these outgoing files the publications of the project. Let’s look at these two pieces in more detail:

Most projects are not completely self-contained. They need files built by other projects in order to be compiled or tested and so on. For example, in order to use Hibernate in my project, I need to include some Hibernate jars in the classpath when I compile my source. To run my tests, I might also need to include some additional jars in the test classpath, such as a particular JDBC driver or the Ehcache jars.

These incoming files form the dependencies of the project. Gradle allows you to tell it what the dependencies of your project are, so that it can take care of finding these dependencies, and making them available in your build. The dependencies might need to be downloaded from a remote Maven or Ivy repository, or located in a local directory, or may need to be built by another project in the same multi-project build. We call this process dependency resolution.

Note that this feature provides a major advantage over Ant. With Ant, you only have the ability to specify absolute or relative paths to specific jars to load. With Gradle, you simply declare the “names” of your dependencies, and other layers determine where to get those dependencies from. You can get similar behavior from Ant by adding Apache Ivy, but Gradle does it better.

Often, the dependencies of a project will themselves have dependencies. For example, Hibernate core requires several other libraries to be present on the classpath when it runs. So, when Gradle runs the tests for your project, it also needs to find these dependencies and make them available. We call these transitive dependencies.

The main purpose of most projects is to build some files that are to be used outside the project. For example, if your project produces a Java library, you need to build a jar, and maybe a source jar and some documentation, and publish them somewhere.

These outgoing files form the publications of the project. Gradle also takes care of this important work for you. You declare the publications of your project, and Gradle takes care of building them and publishing them somewhere. Exactly what “publishing” means depends on what you want to do. You might want to copy the files to a local directory, or upload them to a remote Maven or Ivy repository. Or you might use the files in another project in the same multi-project build. We call this process publication.

Declaring your dependencies

Let’s look at some dependency declarations. Here’s a basic build script:

Example 17. Declaring dependencies

build.gradle

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile group: 'org.hibernate', name: 'hibernate-core', version: '3.6.7.Final'
    testCompile group: 'junit', name: 'junit', version: '4.+'
}

What’s going on here? This build script says a few things about the project. Firstly, it states that Hibernate core 3.6.7.Final is required to compile the project’s production source. By implication, Hibernate core and its dependencies are also required at runtime. The build script also states that any junit >= 4.0 is required to compile the project’s tests. It also tells Gradle to look in the Maven central repository for any dependencies that are required. The following sections go into the details.

Dependency configurations

A Configuration is a named set of dependencies and artifacts. There are three main purposes for a Configuration:

Declaring Dependencies

The plugin uses configurations to make it easy for build authors to declare what other subprojects or external artifacts are needed for various purposes during the execution of tasks defined by the plugin.

Resolving Dependencies

The plugin uses configurations to find (and possibly download) inputs to the tasks it defines.

Exposing Artifacts for Consumption

The plugin uses configurations to define what artifacts it generates for other projects to consume.

With those three purposes in mind, let’s take a look at a few of the standard configurations defined by the Java Library Plugin. You can find more details in the section called “The Java Library plugin configurations”.

implementation

The dependencies required to compile the production source of the project, but which are not part of the api exposed by the project. This configuration is an example of a configuration used for Declaring Dependencies.

runtimeClasspath

The dependencies required by the production classes at runtime. By default, this includes the dependencies declared in the api, implementation, and runtimeOnly configurations. This configuration is an example of a configuration used for Resolving Dependencies, and as such, users should never declare dependencies directly in the runtimeClasspath configuration.

apiElements

The dependencies which are part of this project’s externally consumable API as well as the classes which are defined in this project which should be consumable by other projects. This configuration is an example of Exposing Artifacts for Consumption.

Various plugins add further standard configurations. You can also define your own custom configurations to use in your build. Please see the section called “Defining the scope of a dependency with configurations” for the details of defining and customizing dependency configurations.
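
As a minimal sketch of such a custom configuration, the configuration name smokeTest and the dependency below are purely illustrative:

build.gradle

configurations {
    smokeTest
}

dependencies {
    smokeTest 'org.seleniumhq.selenium:selenium-api:2.53.1'
}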

External dependencies

There are various types of dependencies that you can declare. One such type is an external dependency. This is a dependency on some files built outside the current build, and stored in a repository of some kind, such as Maven central, or a corporate Maven or Ivy repository, or a directory in the local file system.

To define an external dependency, you add it to a dependency configuration:

Example 18. Definition of an external dependency

build.gradle

dependencies {
    compile group: 'org.hibernate', name: 'hibernate-core', version: '3.6.7.Final'
}

An external dependency is identified using group, name and version attributes. Depending on which kind of repository you are using, group and version may be optional.

The shortcut form for declaring external dependencies looks like “group:name:version”.

Example 19. Shortcut definition of an external dependency

build.gradle

dependencies {
    compile 'org.hibernate:hibernate-core:3.6.7.Final'
}

To find out more about defining dependencies, have a look at Declaring Dependencies.

Repositories

How does Gradle find the files for external dependencies? Gradle looks for them in a repository. A repository is really just a collection of files, organized by group, name and version. Gradle understands several different repository formats, such as Maven and Ivy, and several different ways of accessing the repository, such as using the local file system or HTTP.

By default, Gradle does not define any repositories. You need to define at least one before you can use external dependencies. One option is to use the Maven central repository:

Example 20. Usage of Maven central repository

build.gradle

repositories {
    mavenCentral()
}

Or Bintray’s JCenter:

Example 21. Usage of JCenter repository

build.gradle

repositories {
    jcenter()
}

Or any other remote Maven repository:

Example 22. Usage of a remote Maven repository

build.gradle

repositories {
    maven {
        url "http://repo.mycompany.com/maven2"
    }
}

Or a remote Ivy repository:

Example 23. Usage of a remote Ivy directory

build.gradle

repositories {
    ivy {
        url "http://repo.mycompany.com/repo"
    }
}

You can also have repositories on the local file system. This works for both Maven and Ivy repositories.

Example 24. Usage of a local Ivy directory

build.gradle

repositories {
    ivy {
        // URL can refer to a local directory
        url "../local-repo"
    }
}

A project can have multiple repositories. Gradle will look for a dependency in each repository in the order they are specified, stopping at the first repository that contains the requested module.
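
For example, in the following sketch Gradle consults Maven central first and falls back to the company repository only for modules that Maven central does not contain:

build.gradle

repositories {
    mavenCentral()
    maven {
        url "http://repo.mycompany.com/maven2"
    }
}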

To find out more about defining repositories, have a look at Declaring Repositories.

Publishing artifacts

Dependency configurations are also used to publish files.[2] We call these files publication artifacts, or usually just artifacts.

The plugins do a pretty good job of defining the artifacts of a project, so you usually don’t need to do anything special to tell Gradle what needs to be published. However, you do need to tell Gradle where to publish the artifacts. You do this by attaching repositories to the uploadArchives task. Here’s an example of publishing to a remote Ivy repository:

Example 25. Publishing to an Ivy repository

build.gradle

uploadArchives {
    repositories {
        ivy {
            credentials {
                username "username"
                password "pw"
            }
            url "http://repo.mycompany.com"
        }
    }
}

Now, when you run gradle uploadArchives, Gradle will build and upload your Jar. Gradle will also generate and upload an ivy.xml file.

You can also publish to Maven repositories. The syntax is slightly different.[3] Note that you also need to apply the Maven plugin in order to publish to a Maven repository. When this is in place, Gradle will generate and upload a pom.xml.

Example 26. Publishing to a Maven repository

build.gradle

apply plugin: 'maven'

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://localhost/tmp/myRepo/")
        }
    }
}

To find out more about publication, have a look at Publishing artifacts.

Where to next?

For all the details of dependency resolution, see Introduction to Dependency Management, and for artifact publication see Publishing artifacts.

If you are interested in the DSL elements mentioned here, have a look at Project.configurations{}, Project.repositories{} and Project.dependencies{}.

Otherwise, continue on to some guides.



[2] We think this is confusing, and we are gradually teasing apart the two concepts in the Gradle DSL.

[3] We are working to make the syntax consistent for resolving from and publishing to Maven repositories.

Executing Multi-Project Builds

Only the smallest of projects has a single build file and source tree, unless it happens to be a massive, monolithic application. It’s often much easier to digest and understand a project that has been split into smaller, inter-dependent modules. The word “inter-dependent” is important, though, and is why you typically want to link the modules together through a single build.

Gradle supports this scenario through multi-project builds.

Structure of a multi-project build

Such builds come in all shapes and sizes, but they do have some common characteristics:

  • A settings.gradle file in the root or master directory of the project

  • A build.gradle file in the root or master directory

  • Child directories that have their own *.gradle build files (some multi-project builds may omit child project build scripts)

The settings.gradle file tells Gradle how the project and subprojects are structured. Fortunately, you don’t have to read this file simply to learn what the project structure is as you can run the command gradle projects. Here’s the output from using that command on the Java multiproject build in the Gradle samples:

Example 27. Listing the projects in a build

Output of gradle -q projects

> gradle -q projects

------------------------------------------------------------
Root project
------------------------------------------------------------

Root project 'multiproject'
+--- Project ':api'
+--- Project ':services'
|    +--- Project ':services:shared'
|    \--- Project ':services:webservice'
\--- Project ':shared'

To see a list of the tasks of a project, run gradle <project-path>:tasks
For example, try running gradle :api:tasks

This tells you that multiproject has three immediate child projects: api, services and shared. The services project then has its own children, shared and webservice. These map to the directory structure, so it’s easy to find them. For example, you can find webservice in <root>/services/webservice.

By default, Gradle uses the name of the directory in which it finds the settings.gradle file as the name of the root project. This usually doesn’t cause problems since all developers check out the same directory name when working on a project. On Continuous Integration servers, like Jenkins, the directory name may be auto-generated and not match the name in your VCS. For that reason, it’s recommended that you always set the root project name to something predictable, even in single project builds. You can configure the root project name by setting rootProject.name.
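
A minimal sketch, reusing the project name from the example above:

settings.gradle

rootProject.name = 'multiproject'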

Each project will usually have its own build file, but that’s not necessarily the case. In the above example, the services project is just a container or grouping of other subprojects. There is no build file in the corresponding directory. However, multiproject does have one for the root project.

The root build.gradle is often used to share common configuration between the child projects, for example by applying the same sets of plugins and dependencies to all the child projects. It can also be used to configure individual subprojects when it is preferable to have all the configuration in one place. This means you should always check the root build file when discovering how a particular subproject is being configured.

Another thing to bear in mind is that the build files might not be called build.gradle. Many projects will name the build files after the subproject names, such as api.gradle and services.gradle from the previous example. Such an approach helps a lot in IDEs because it’s tough to work out which build.gradle file out of twenty possibilities is the one you want to open. This little piece of magic is handled by the settings.gradle file, but as a build user you don’t need to know the details of how it’s done. Just have a look through the child project directories to find the files with the .gradle suffix.

Once you know what subprojects are available, the key question for a build user is how to execute the tasks within the project.

Executing a multi-project build

From a user’s perspective, multi-project builds are still collections of tasks you can run. The difference is that you may want to control which project’s tasks get executed. You have two options here:

  • Change to the directory corresponding to the subproject you’re interested in and just execute gradle <task> as normal.

  • Use a qualified task name from any directory, although this is usually done from the root. For example: gradle :services:webservice:build will build the webservice subproject and any subprojects it depends on.

The first approach is similar to the single-project use case, but Gradle works slightly differently in the case of a multi-project build. The command gradle test will execute the test task in any subprojects, relative to the current working directory, that have that task. So if you run the command from the root project directory, you’ll run test in api, shared, services:shared and services:webservice. If you run the command from the services project directory, you’ll only execute the task in services:shared and services:webservice.

For more control over what gets executed, use qualified names (the second approach mentioned). These are paths just like directory paths, but use ‘:’ instead of ‘/’ or ‘\’. If the path begins with a ‘:’, then the path is resolved relative to the root project. In other words, the leading ‘:’ represents the root project itself. All other colons are path separators.

This approach works for any task, so if you want to know what tasks are in a particular subproject, just use the tasks task, e.g. gradle :services:webservice:tasks.

Regardless of which technique you use to execute tasks, Gradle will take care of building any subprojects that the target depends on. You don’t have to worry about the inter-project dependencies yourself. If you’re interested in how this is configured, you can read about writing multi-project builds later in the user guide.

There’s one last thing to note. When you’re using the Gradle wrapper, the first approach doesn’t work well because you have to specify the path to the wrapper script if you’re not in the project root. For example, if you’re in the webservice subproject directory, you would have to run ../../gradlew build.

That’s all you really need to know about multi-project builds as a build user. You can now identify whether a build is a multi-project one and you can discover its structure. And finally, you can execute tasks within specific subprojects.

Continuous build

Continuous build is an incubating feature. This means that it is incomplete and not yet at regular Gradle production quality. This also means that this Gradle User Guide chapter is a work in progress.

Typically, you ask Gradle to perform a single build by way of specifying tasks that Gradle should execute. Gradle will determine the actual set of tasks that need to be executed to satisfy the request, execute them all, and then stop doing work until the next request. A continuous build differs in that Gradle will keep satisfying the initial build request (until instructed to stop) by executing the build when it is detected that the result of the previous build is now out of date. For example, if your build compiles Java source files to Java class files, a continuous build would automatically initiate a compile when the source files change. Continuous build is useful for many scenarios.

How do I start and stop a continuous build?

A continuous build can be started by supplying either the --continuous or -t switches to Gradle, along with the list of tasks, switches and arguments that define the work to do. For example, gradle build --continuous. This will have the same effect as running gradle build, but instead of Gradle exiting when done, it will wait for changes to the build inputs. When a change occurs, gradle build will be automatically executed again and the process repeats.

If Gradle is attached to an interactive input source, such as a terminal, the continuous build can be exited by pressing CTRL-D (On Microsoft Windows, it is required to also press ENTER or RETURN after CTRL-D). If Gradle is not attached to an interactive input source (e.g. is running as part of a script), the build process must be terminated (e.g. using the kill command or similar). If the build is being executed via the Tooling API, the build can be cancelled using the Tooling API’s cancellation mechanism.

What will cause a subsequent build?

Task file inputs

Task implementations declare their file system inputs by annotating their properties with InputFiles and other similar annotations. Please see the section called “Up-to-date checks (AKA Incremental Build)” for more information.

At this time, only changes to task inputs are noticed. Gradle will start watching for changes just before the task starts to execute. No other changes will initiate a build. For example, changes to build scripts and build logic will not initiate a build. Likewise, changes to files that are read during the configuration of the build, not the execution, will not initiate a build. In order to incorporate such changes, the continuous build must be restarted manually.

Consider a typical build using the Java plugin, using the conventional filesystem layout. The following diagram visualizes the task graph for gradle build:

Figure 4. Java plugin task graph

The following key tasks of the graph use files in the corresponding directories as inputs:

compileJava

src/main/java

processResources

src/main/resources

compileTestJava

src/test/java

processTestResources

src/test/resources

Assuming that the initial build is successful (i.e. the build task and its dependencies complete without error), changes to files in, or the addition/removal of files from, the locations listed above will initiate a new build. If a change is made to a Java source file in src/main/java, the build will fire and all tasks will be scheduled. Gradle’s incremental build support ensures that only the tasks that are actually affected by the change are executed.

If the change to the main Java source causes compilation to fail, subsequent changes to the test source in src/test/java will not initiate a new build. As the test source depends on the main source, there is no point building until the main source has changed, potentially fixing the compilation error. After each build, only the inputs of the tasks that actually executed will be monitored for changes.

Continuous build is in no way coupled to compilation. It works for all types of tasks. For example, the processResources task copies and processes the files from src/main/resources for inclusion in the built JAR. As such, a change to any file in this directory will also initiate a build.

Limitations and quirks

There are several issues to be aware of with the current implementation of continuous build. These are likely to be addressed in future Gradle releases.

Build cycles

Gradle starts watching for changes just before a task executes. If a task modifies its own inputs while executing, Gradle will detect the change and trigger a new build. If every time the task executes, the inputs are modified again, the build will be triggered again. This isn’t unique to continuous build. A task that modifies its own inputs will never be considered up-to-date when run "normally" without continuous build.

If your build enters a build cycle like this, you can track down the task by looking at the list of files reported changed by Gradle. After identifying the file(s) that are changed during each build, you should look for a task that has that file as an input. In some cases, it may be obvious (e.g., a Java file is compiled with compileJava). In other cases, you can use --info logging to find the task that is out-of-date due to the identified files.

Restrictions with Java 9

Due to class access restrictions related to Java 9, Gradle cannot set some operating system specific options, which means that:

  • On macOS, Gradle will poll for file changes every 10 seconds instead of every 2 seconds.

  • On Windows, Gradle must use individual file watches (like on Linux/macOS), which may cause continuous build to no longer work on very large projects.

Performance and stability

The JDK file watching facility relies on inefficient file system polling on macOS (see: JDK-7133447). This can significantly delay notification of changes on large projects with many source files.

Additionally, the watching mechanism may deadlock under heavy load on macOS (see: JDK-8079620). This will manifest as Gradle appearing not to notice file changes. If you suspect this is occurring, exit continuous build and start again.

On Linux, OpenJDK’s implementation of the file watch service can sometimes miss file system events (see: JDK-8145981).

Changes to symbolic links

  • Creating or removing symbolic links to files will initiate a build.

  • Modifying the target of a symbolic link will not cause a rebuild.

  • Creating or removing symbolic links to directories will not cause rebuilds.

  • Creating new files in the target directory of a symbolic link will not cause a rebuild.

  • Deleting the target directory will not cause a rebuild.

Changes to build logic are not considered

The current implementation does not recalculate the build model on subsequent builds. This means that changes to task configuration, or any other change to the build model, are effectively ignored.

Composite builds

Composite build is an incubating feature. While useful for many use cases, there are bugs to be discovered, rough edges to smooth, and enhancements we plan to make. Thanks for trying it out!

What is a composite build?

A composite build is simply a build that includes other builds. In many ways a composite build is similar to a Gradle multi-project build, except that instead of including single projects, complete builds are included.

Composite builds allow you to:

  • combine builds that are usually developed independently, for instance when trying out a bug fix in a library that your application uses

  • decompose a large multi-project build into smaller, more isolated chunks that can be worked on independently or together as needed

A build that is included in a composite build is referred to, naturally enough, as an "included build". Included builds do not share any configuration with the composite build, or the other included builds. Each included build is configured and executed in isolation.

Included builds interact with other builds via dependency substitution. If any build in the composite has a dependency that can be satisfied by the included build, then that dependency will be replaced by a project dependency on the included build.

By default, Gradle will attempt to determine the dependencies that can be substituted by an included build. However for more flexibility, it is possible to explicitly declare these substitutions if the default ones determined by Gradle are not correct for the composite. See the section called “Declaring the dependencies substituted by an included build”.

As well as consuming outputs via project dependencies, a composite build can directly declare task dependencies on included builds. Included builds are isolated, and are not able to declare task dependencies on the composite build or on other included builds. See the section called “Depending on tasks in an included build”.

Defining a composite build

The following examples demonstrate the various ways that 2 Gradle builds that are normally developed separately can be combined into a composite build. For these examples, the my-utils multi-project build produces 2 different java libraries (number-utils and string-utils), and the my-app build produces an executable using functions from those libraries.

The my-app build does not have direct dependencies on my-utils. Instead, it declares binary dependencies on the libraries produced by my-utils.

Example 28. Dependencies of my-app

my-app/build.gradle

apply plugin: 'java'
apply plugin: 'application'
apply plugin: 'idea'

group "org.sample"
version "1.0"

mainClassName = "org.sample.myapp.Main"

dependencies {
    compile "org.sample:number-utils:1.0"
    compile "org.sample:string-utils:1.0"
}

repositories {
    jcenter()
}

Note: The code for this example can be found at samples/compositeBuilds/basic in the ‘-all’ distribution of Gradle.


Defining a composite build via --include-build

The --include-build command-line argument turns the executed build into a composite, substituting dependencies from the included build into the executed build.

Example 29. Declaring a command-line composite

Output of gradle --include-build ../my-utils run

> gradle --include-build ../my-utils run
:processResources NO-SOURCE
:my-utils:string-utils:compileJava
:my-utils:string-utils:processResources NO-SOURCE
:my-utils:string-utils:classes
:my-utils:string-utils:jar
:my-utils:number-utils:compileJava
:my-utils:number-utils:processResources NO-SOURCE
:my-utils:number-utils:classes
:my-utils:number-utils:jar
:compileJava
:classes
:run
The answer is 42

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

Defining a composite build via settings.gradle

It’s possible to make the above arrangement persistent, by using Settings.includeBuild(java.lang.Object) to declare the included build in the settings.gradle file. The settings.gradle file can be used to add subprojects and included builds at the same time. Included builds are added by location. See the examples below for more details.

Defining a separate composite build

One downside of the above approach is that it requires you to modify an existing build, rendering it less useful as a standalone build. One way to avoid this is to define a separate composite build, whose only purpose is to combine otherwise separate builds.

Example 30. Declaring a separate composite

settings.gradle

rootProject.name='adhoc'

includeBuild '../my-app'
includeBuild '../my-utils'

In this scenario, the 'main' build that is executed is the composite, and it doesn’t define any useful tasks to execute itself. In order to execute the 'run' task in the 'my-app' build, the composite build must define a delegating task.

Example 31. Depending on task from included build

build.gradle

task run {
    dependsOn gradle.includedBuild('my-app').task(':run')
}

More details on tasks that depend on included build tasks are provided below.

Restrictions on included builds

Most builds can be included in a composite; however, there are some limitations.

Every included build:

  • must have a settings.gradle file.

  • must not itself be a composite build.

  • must not have a rootProject.name the same as another included build.

  • must not have a rootProject.name the same as a top-level project of the composite build.

  • must not have a rootProject.name the same as the composite build rootProject.name.

Interacting with a composite build

In general, interacting with a composite build is much the same as a regular multi-project build. Tasks can be executed, tests can be run, and builds can be imported into the IDE.

Executing tasks

Tasks from the composite build can be executed from the command line, or from your IDE. Executing a task will result in direct task dependencies being executed, as well as those tasks required to build dependency artifacts from included builds.

There is not (yet) any means to directly execute a task from an included build via the command line. Included build tasks are automatically executed in order to generate required dependency artifacts, or the including build can declare a dependency on a task from an included build.

Importing into the IDE

One of the most useful features of composite builds is IDE integration. By applying the idea or eclipse plugin to your build, it is possible to generate a single IDEA or Eclipse project that permits all builds in the composite to be developed together.

In addition to these Gradle plugins, recent versions of IntelliJ IDEA and Eclipse Buildship support direct import of a composite build.

Importing a composite build permits sources from separate Gradle builds to be easily developed together. For every included build, each sub-project is included as an IDEA Module or Eclipse Project. Source dependencies are configured, providing cross-build navigation and refactoring.

Declaring the dependencies substituted by an included build

By default, Gradle will configure each included build in order to determine the dependencies it can provide. The algorithm for doing this is very simple: Gradle will inspect the group and name for the projects in the included build, and substitute project dependencies for any external dependency matching ${project.group}:${project.name}.

There are cases when the default substitutions determined by Gradle are not sufficient, or they are not correct for a particular composite. For these cases it is possible to explicitly declare the substitutions for an included build. Take for example a single-project build 'unpublished', that produces a java utility library but does not declare a value for the group attribute:

Example 32. Build that does not declare group attribute

build.gradle

apply plugin: 'java'

When this build is included in a composite, it will attempt to substitute for the dependency module "undefined:unpublished" ("undefined" being the default value for project.group, and 'unpublished' being the root project name). Clearly this isn’t going to be very useful in a composite build. To use the unpublished library unmodified in a composite build, the composing build can explicitly declare the substitutions that it provides:

Example 33. Declaring the substitutions for an included build

settings.gradle

rootProject.name = 'app'

includeBuild('../anonymous-library') {
    dependencySubstitution {
        substitute module('org.sample:number-utils') with project(':')
    }
}

With this configuration, the composite build will substitute any dependency on org.sample:number-utils with a dependency on the root project of the included 'unpublished' build.

Cases where included build substitutions must be declared

Many builds that use the uploadArchives task to publish artifacts will function automatically as an included build, without declared substitutions. Here are some common cases where declared substitutions are required:

  • When the archivesBaseName property is used to set the name of the published artifact.

  • When a configuration other than default is published: this usually means a task other than uploadArchives is used.

  • When MavenPom.addFilter() is used to publish artifacts that don’t match the project name.

  • When the maven-publish or ivy-publish plugins are used for publishing, and the publication coordinates don’t match ${project.group}:${project.name}.

Cases where composite build substitutions won’t work

Some builds won’t function correctly when included in a composite, even when dependency substitutions are explicitly declared. This limitation is due to the fact that a project dependency that is substituted will always point to the default configuration of the target project. Any time that the artifacts and dependencies specified for the default configuration of a project don’t match what is actually published to a repository, then the composite build may exhibit different behaviour.

Here are some cases where the published module metadata may be different from the project default configuration:

  • When a configuration other than default is published.

  • When the maven-publish or ivy-publish plugins are used.

  • When the POM or ivy.xml file is tweaked as part of publication.

Builds using these features function incorrectly when included in a composite build. We plan to improve this in the future.

Depending on tasks in an included build

While included builds are isolated from one another and cannot declare direct dependencies, a composite build is able to declare task dependencies on its included builds. The included builds are accessed using Gradle.getIncludedBuilds() or Gradle.includedBuild(java.lang.String), and a task reference is obtained via the IncludedBuild.task(java.lang.String) method.

Using these APIs, it is possible to declare a dependency on a task in a particular included build, or tasks with a certain path in all or some of the included builds.

Example 34. Depending on a single task from an included build

build.gradle

task run {
    dependsOn gradle.includedBuild('my-app').task(':run')
}

Example 35. Depending on a tasks with path in all included builds

build.gradle

task publishDeps {
    dependsOn gradle.includedBuilds*.task(':uploadArchives')
}

Current limitations and future plans for composite builds

We think composite builds are pretty useful already. However, there are some things that don’t yet work the way we’d like, and other improvements that we think will make things work even better.

Improvements we have planned for upcoming releases, which address some of the limitations of the current implementation, include:

  • Better detection of dependency substitution, for builds that publish with custom coordinates, builds that produce multiple components, etc. This will reduce the cases where dependency substitution needs to be explicitly declared for an included build.

  • The ability to target a task or tasks in an included build directly from the command line. We are currently exploring syntax options for allowing this functionality, which will remove many cases where a delegating task is required in the composite.

  • Making the implicit buildSrc project an included build.

  • Supporting composite-of-composite builds.

Build Environment

Gradle provides multiple mechanisms for configuring behavior of Gradle itself and specific projects. The following is a reference for using these mechanisms.

When configuring Gradle behavior you can use these methods, listed in order of highest to lowest precedence (first one wins):

  • Command-line flags such as --build-cache. These have precedence over properties and environment variables.

  • System properties such as systemProp.http.proxyHost=somehost.org stored in a gradle.properties file.

  • Gradle properties such as org.gradle.caching=true that are typically stored in a gradle.properties file in the project root directory or in the directory pointed to by the GRADLE_USER_HOME environment variable.

  • Environment variables such as GRADLE_OPTS sourced by the environment that executes Gradle.

Aside from configuring the build environment, you can configure a given project build using Project properties such as -PreleaseType=final.

Gradle properties

Gradle provides several options that make it easy to configure the Java process that will be used to execute your build. While it’s possible to configure these in your local environment via GRADLE_OPTS or JAVA_OPTS, it is useful to store certain settings like JVM memory configuration and Java home location in version control so that an entire team can work with a consistent environment.

Setting up a consistent environment for your build is as simple as placing these settings into a gradle.properties file. The configuration is applied in the following order (if an option is configured in multiple locations, the last one wins):

  • gradle.properties in project root directory.

  • gradle.properties in GRADLE_USER_HOME directory.

  • system properties, e.g. when -Dgradle.user.home is set on the command line.

The following properties can be used to configure the Gradle build environment:

org.gradle.caching=(true,false)

When set to true, Gradle will reuse task outputs from any previous build, when possible, resulting in much faster builds. Learn more about using the build cache.

org.gradle.configureondemand=(true,false)

Enables incubating configuration on demand, where Gradle will attempt to configure only necessary projects.

org.gradle.console=(auto,plain,rich,verbose)

Customize console output coloring or verbosity. Default depends on how Gradle is invoked. See command-line logging for additional details.

org.gradle.daemon=(true,false)

When set to true the Gradle Daemon is used to run the build. Default is true.

org.gradle.daemon.idletimeout=(# of idle millis)

Gradle Daemon will terminate itself after specified number of idle milliseconds. Default is 10800000 (3 hours).

org.gradle.debug=(true,false)

When set to true, Gradle will run the build with remote debugging enabled, listening on port 5005. Note that this is the equivalent of adding -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 to the JVM command line and will suspend the virtual machine until a debugger is attached. Default is false.

org.gradle.java.home=(path to JDK home)

Specifies the Java home for the Gradle build process. The value can be set to either a jdk or jre location, however, depending on what your build does, using a JDK is safer. A reasonable default is used if the setting is unspecified.

org.gradle.jvmargs=(JVM arguments)

Specifies the JVM arguments used for the Gradle Daemon. The setting is particularly useful for configuring JVM memory settings for build performance.

org.gradle.logging.level=(quiet,warn,lifecycle,info,debug)

When set to quiet, warn, lifecycle, info, or debug, Gradle will use this log level. The values are not case sensitive. The lifecycle level is the default. See the section called “Choosing a log level”.

org.gradle.parallel=(true,false)

When configured, Gradle will fork up to org.gradle.workers.max JVMs to execute projects in parallel. To learn more about parallel task execution, see the Gradle performance guide.

org.gradle.warning.mode=(all,none,summary)

When set to all, summary or none, this controls how Gradle displays warnings. See the section called “Logging options” for details.

org.gradle.workers.max=(max # of worker processes)

When configured, Gradle will use a maximum of the given number of workers. Default is the number of CPU processors. See also performance command-line options.
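As an illustration, here is a gradle.properties sketch that enables several of these options together; the specific values are illustrative assumptions, not recommendations:

gradle.properties

org.gradle.caching=true
org.gradle.parallel=true
org.gradle.workers.max=4
org.gradle.jvmargs=-Xmx2g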

The following example demonstrates usage of various properties.

Example 36. Setting properties with a gradle.properties file

gradle.properties

gradlePropertiesProp=gradlePropertiesValue
sysProp=shouldBeOverWrittenBySysProp
envProjectProp=shouldBeOverWrittenByEnvProp
systemProp.system=systemValue

build.gradle

task printProps {
    doLast {
        println commandLineProjectProp
        println gradlePropertiesProp
        println systemProjectProp
        println envProjectProp
        println System.properties['system']
    }
}

Output of gradle -q -PcommandLineProjectProp=commandLineProjectPropValue -Dorg.gradle.project.systemProjectProp=systemPropertyValue printProps

> gradle -q -PcommandLineProjectProp=commandLineProjectPropValue -Dorg.gradle.project.systemProjectProp=systemPropertyValue printProps
commandLineProjectPropValue
gradlePropertiesValue
systemPropertyValue
envPropertyValue
systemValue

System properties

Using the -D command-line option, you can pass a system property to the JVM which runs Gradle. The -D option of the gradle command has the same effect as the -D option of the java command.

You can also set system properties in gradle.properties files with the prefix systemProp.

Example 37. Specifying system properties in gradle.properties

systemProp.gradle.wrapperUser=myuser
systemProp.gradle.wrapperPassword=mypassword

The following system properties are available. Note that command-line options take precedence over system properties.

gradle.wrapperUser=(myuser)

Specify user name to download Gradle distributions from servers using HTTP Basic Authentication. Learn more in the section called “Authenticated Gradle distribution download”.

gradle.wrapperPassword=(mypassword)

Specify password for downloading a Gradle distribution using the Gradle wrapper.

gradle.user.home=(path to directory)

Specify the Gradle user home directory.

In a multi project build, “systemProp.” properties set in any project except the root will be ignored. That is, only the root project’s gradle.properties file will be checked for properties that begin with the “systemProp.” prefix.

Environment variables

The following environment variables are available for the gradle command. Note that command-line options and system properties take precedence over environment variables.

GRADLE_OPTS

Specifies command-line arguments to use when starting the Gradle client. This can be useful for setting the properties to use when running Gradle.

GRADLE_USER_HOME

Specifies the Gradle user home directory (which defaults to $USER_HOME/.gradle if not set).

JAVA_HOME

Specifies the JDK installation directory to use.

Project properties

You can add properties directly to your Project object via the -P command line option.

Gradle can also set project properties when it sees specially-named system properties or environment variables. If the environment variable name looks like ORG_GRADLE_PROJECT_prop=somevalue, then Gradle will set a prop property on your project object, with the value of somevalue. Gradle also supports this for system properties, but with a different naming pattern, which looks like org.gradle.project.prop. Both of the following will set the foo property on your Project object to "bar".

Example 38. Setting a project property via gradle.properties

org.gradle.project.foo=bar

Example 39. Setting a project property via environment variable

ORG_GRADLE_PROJECT_foo=bar

The properties file in the user’s home directory has precedence over property files in the project directories.

This feature is very useful when you don’t have admin rights to a continuous integration server and you need to set property values that should not be easily visible. Since you cannot use the -P option in that scenario, nor change the system-level configuration files, the correct strategy is to change the configuration of your continuous integration build job, adding an environment variable setting that matches an expected pattern. This won’t be visible to normal users on the system.

You can access a project property in your build script simply by using its name as you would use a variable.

If a project property is referenced but does not exist, an exception will be thrown and the build will fail.

You should check for existence of optional project properties before you access them using the Project.hasProperty(java.lang.String) method.
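For instance, a minimal sketch guarding access to a hypothetical optional property named myOptionalProp:

build.gradle

task printOptionalProp {
    doLast {
        if (project.hasProperty('myOptionalProp')) {
            // Only read the property after confirming that it exists
            println project.property('myOptionalProp')
        } else {
            println 'myOptionalProp was not provided'
        }
    }
}

Running gradle -q printOptionalProp -PmyOptionalProp=hello would then print hello, while omitting the -P option would not fail the build.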

Configuring JVM memory

Gradle defaults to 1024 megabytes maximum heap per JVM process (-Xmx1024m); however, that may be too much or too little depending on the size of your project. There are many JVM options that can be tuned (this blog post on Java performance tuning and this reference may be helpful).

You can adjust JVM options for Gradle in the following ways:

The JAVA_OPTS environment variable is used for the Gradle client JVM, but not for forked JVMs.

Example 40. Changing JVM settings for Gradle client JVM

JAVA_OPTS="-Xmx2g -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"

You need to use the org.gradle.jvmargs Gradle property to configure JVM settings for the Gradle Daemon.

Example 41. Changing JVM settings for forked Gradle JVMs

org.gradle.jvmargs=-Xmx2g -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8

Many settings (like the Java version and maximum heap size) can only be specified when launching a new JVM for the build process. This means that Gradle must launch a separate JVM process to execute the build after parsing the various gradle.properties files.

When running with the Gradle Daemon, a JVM with the correct parameters is started once and reused for each daemon build execution. When Gradle is executed without the daemon, then a new JVM must be launched for every build execution, unless the JVM launched by the Gradle start script happens to have the same parameters.

Certain tasks in Gradle also fork additional JVM processes, like the test task when using Test.setMaxParallelForks(int) for JUnit or TestNG tests. You must configure these through the tasks themselves.

Example 42. Set Java compile options for JavaCompile tasks

apply plugin: "java"

tasks.withType(JavaCompile) {
    options.compilerArgs += ["-Xdoclint:none", "-Xlint:none", "-nowarn"]
}

See other examples in the Test API documentation and test execution in the Java plugin reference.
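For instance, a minimal sketch of configuring the forked test JVMs through the test task itself; the fork count and heap size are illustrative assumptions:

build.gradle

tasks.withType(Test) {
    // Forked test JVMs are configured on the task, not via org.gradle.jvmargs
    maxParallelForks = 4
    jvmArgs '-Xmx1g'
}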

Build scans will tell you information about the JVM that executed the build when you use the --scan option.

Build Environment in build scan

Configuring a task using project properties

It’s possible to change the behavior of a task based on project properties specified at invocation time.

Suppose you’d like to ensure release builds are only triggered by CI. A simple way to handle this is through an isCI project property.

Example 43. Prevent releasing outside of CI

build.gradle

task performRelease {
    doLast {
        if (project.hasProperty("isCI")) {
            println("Performing release actions")
        } else {
            throw new InvalidUserDataException("Cannot perform release outside of CI")
        }
    }
}

Output of gradle performRelease -PisCI=true --quiet

> gradle performRelease -PisCI=true --quiet
Performing release actions

Accessing the web through an HTTP proxy

Configuring an HTTP or HTTPS proxy (for downloading dependencies, for example) is done via standard JVM system properties. These properties can be set directly in the build script; for example, setting the HTTP proxy host would be done with System.setProperty('http.proxyHost', 'www.somehost.org'). Alternatively, the properties can be specified in gradle.properties.

Example 44. Configuring an HTTP proxy using gradle.properties

systemProp.http.proxyHost=www.somehost.org
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=userid
systemProp.http.proxyPassword=password
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost

There are separate settings for HTTPS.

Example 45. Configuring an HTTPS proxy using gradle.properties

systemProp.https.proxyHost=www.somehost.org
systemProp.https.proxyPort=8080
systemProp.https.proxyUser=userid
systemProp.https.proxyPassword=password
systemProp.https.nonProxyHosts=*.nonproxyrepos.com|localhost

You may need to set other properties to access other networks; the standard JDK documentation on networking and proxy system properties is a helpful reference.

NTLM Authentication

If your proxy requires NTLM authentication, you may need to provide the authentication domain as well as the username and password. There are 2 ways that you can provide the domain for authenticating to an NTLM proxy (see the sketch after this list):

  • Set the http.proxyUser system property to a value like domain/username.

  • Provide the authentication domain via the http.auth.ntlm.domain system property.
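A gradle.properties sketch of the second approach; the domain name MYDOMAIN is a hypothetical placeholder:

gradle.properties

systemProp.http.proxyHost=www.somehost.org
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=userid
systemProp.http.proxyPassword=password
systemProp.http.auth.ntlm.domain=MYDOMAIN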

Troubleshooting

This chapter is currently a work in progress.

When using Gradle (or any software package), you can run into problems. You may not understand how to use a particular feature, or you may encounter a defect. Or, you may have a general question about Gradle.

This chapter gives some advice for troubleshooting problems and explains how to get help with your problems.

Working through problems

If you are encountering problems, one of the first things to try is using the very latest release of Gradle. New versions of Gradle are released frequently with bug fixes and new features. The problem you are having may have been fixed in a new release.

If you are using the Gradle Daemon, try temporarily disabling the daemon (you can pass the command line switch --no-daemon). More information about troubleshooting the daemon process is located in The Gradle Daemon.

Getting help

The place to go for help with Gradle is http://forums.gradle.org. The Gradle Forums is the place where you can report problems and ask questions of the Gradle developers and other community members.

If something’s not working for you, posting a question or problem report to the forums is the fastest way to get help. It’s also the place to post improvement suggestions or new ideas. The development team frequently posts news items and announces releases via the forum, making it a great way to stay up to date with the latest Gradle developments.

Embedding Gradle using the Tooling API

Introduction to the Tooling API

Gradle provides a programmatic API called the Tooling API, which you can use for embedding Gradle into your own software. This API allows you to execute and monitor builds and to query Gradle about the details of a build. The main audience for this API is IDE, CI server and other UI authors; however, the API is open for anyone who needs to embed Gradle in their application.

  • Gradle TestKit uses the Tooling API for functional testing of your Gradle plugins.

  • Eclipse Buildship uses the Tooling API for importing your Gradle project and running tasks.

  • IntelliJ IDEA uses the Tooling API for importing your Gradle project and running tasks.

Tooling API Features

A fundamental characteristic of the Tooling API is that it operates in a version independent way. This means that you can use the same API to work with builds that use different versions of Gradle, including versions that are newer or older than the version of the Tooling API that you are using. The Tooling API is Gradle wrapper aware and, by default, uses the same Gradle version as that used by the wrapper-powered build.

Some features that the Tooling API provides:

  • Query the details of a build, including the project hierarchy and the project dependencies, external dependencies (including source and Javadoc jars), source directories and tasks of each project.

  • Execute a build and listen to stdout and stderr logging and progress messages (e.g. the messages shown in the 'status bar' when you run on the command line).

  • Execute a specific test class or test method.

  • Receive interesting events as a build executes, such as project configuration, task execution or test execution.

  • Cancel a build that is running.

  • Combine multiple separate Gradle builds into a single composite build.

  • The Tooling API can download and install the appropriate Gradle version, similar to the wrapper.

  • The implementation is lightweight, with only a small number of dependencies. It is also a well-behaved library, and makes no assumptions about your classloader structure or logging configuration. This makes the API easy to embed in your application.

Tooling API and the Gradle Build Daemon

The Tooling API always uses the Gradle daemon. This means that subsequent calls to the Tooling API, be they model building requests or task executing requests, will be executed in the same long-lived process. The Gradle Daemon contains more details about the daemon, specifically information on situations when new daemons are forked.

Quickstart

As the Tooling API is an interface for developers, the Javadoc is the main documentation for it. We provide several samples that live in samples/toolingApi in your Gradle distribution. These samples specify all of the required dependencies for the Tooling API with examples for querying information from Gradle builds and executing tasks from the Tooling API.

To use the Tooling API, add the following repository and dependency declarations to your build script:

Example 46. Using the tooling API

build.gradle

repositories {
    maven { url 'https://repo.gradle.org/gradle/libs-releases' }
}

dependencies {
    compile "org.gradle:gradle-tooling-api:${toolingApiVersion}"
    // The tooling API needs an SLF4J implementation available at runtime; replace this with any other implementation
    runtime 'org.slf4j:slf4j-simple:1.7.10'
}

The main entry point to the Tooling API is the GradleConnector. You can navigate from there to find code samples and explore the available Tooling API models. You can use GradleConnector.connect() to create a ProjectConnection. A ProjectConnection connects to a single Gradle project. Using the connection you can execute tasks, tests and retrieve models relative to this project.
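As a minimal sketch of this flow (the project directory path is a hypothetical placeholder), connecting to a build and running a task might look like this:

import org.gradle.tooling.GradleConnector
import org.gradle.tooling.ProjectConnection

// Connect to the Gradle build located in the given directory
ProjectConnection connection = GradleConnector.newConnector()
        .forProjectDirectory(new File('/path/to/some/project'))
        .connect()
try {
    // Run the 'build' task, as if invoking 'gradle build' on the command line
    connection.newBuild().forTasks('build').run()
} finally {
    connection.close()
}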

Gradle version and Java version compatibility

Provider side

The current version of Tooling API supports running builds using Gradle versions 1.2 and later. However, support for running builds with Gradle versions older than 2.6 is deprecated and will be removed in Tooling API version 5.0.

Consumer side

The current version of Gradle supports running builds via Tooling API versions 2.0 and later. However, support for running builds via Tooling API versions older than 3.0 is deprecated and will be removed in Gradle 5.0.

You should note that not all features of the Tooling API are available for all versions of Gradle. For example, build cancellation is only available when a build uses Gradle 2.1 and later. Refer to the documentation for each class and method for more details.

Java version

The Tooling API requires Java 8 or later. Java 7 is currently still supported but will be removed in Gradle 5.0. The Gradle version used by builds may have additional Java version requirements.

Build Cache

The build cache feature is ready to be used for Java, Groovy and Scala projects. Work continues to make it available in more areas.

The build cache feature described here is different from the Android plugin build cache.

Overview

The Gradle build cache is a cache mechanism that aims to save time by reusing outputs produced by other builds. The build cache works by storing (locally or remotely) build outputs and allowing builds to fetch these outputs from the cache when it is determined that inputs have not changed, avoiding the expensive work of regenerating them.

A first feature using the build cache is task output caching. Essentially, task output caching leverages the same intelligence as up-to-date checks that Gradle uses to avoid work when a previous local build has already produced a set of task outputs. But instead of being limited to the previous build in the same workspace, task output caching allows Gradle to reuse task outputs from any earlier build in any location on the local machine. When using a shared build cache for task output caching this even works across developer machines and build agents.

Apart from task output caching, we expect other features to use the build cache in the future.

A complete guide is available about using the build cache. It covers the different scenarios caching can improve, and detailed discussions of the different caveats you need to be aware of when enabling caching for a build.

Enable the Build Cache

By default, the build cache is not enabled. You can enable the build cache in a couple of ways:

Run with --build-cache on the command-line

Gradle will use the build cache for this build only.

Put org.gradle.caching=true in your gradle.properties

Gradle will try to reuse outputs from previous builds for all builds, unless explicitly disabled with --no-build-cache.

When the build cache is enabled, it will store build outputs in the Gradle user home. For configuring this directory or different kinds of build caches see the section called “Configure the Build Cache”.

Task Output Caching

Beyond incremental builds described in the section called “Up-to-date checks (AKA Incremental Build)”, Gradle can save time by reusing outputs from previous executions of a task by matching inputs to the task. Task outputs can be reused between builds on one computer or even between builds running on different computers via a build cache.

We have focused on the use case where users have an organization-wide remote build cache that is populated regularly by continuous integration builds. Developers and other continuous integration agents should pull cache entries from the remote build cache. We expect that developers will not be allowed to populate the remote build cache, and all continuous integration builds populate the build cache after running the clean task.

For your build to play well with task output caching it must work well with the incremental build feature. For example, when running your build twice in a row all tasks with outputs should be UP-TO-DATE. You cannot expect faster builds or correct builds when enabling task output caching when this prerequisite is not met.

Task output caching is automatically enabled when you enable the build cache, see the section called “Enable the Build Cache”.

What does it look like

Let us start with a project using the Java plugin which has a few Java source files. We run the build the first time.

$> gradle --build-cache compileJava
Build cache is an incubating feature.
Using local directory build cache for the root build (location = /home/user/.gradle/caches/build-cache-1).
:compileJava
:processResources
:classes
:jar
:assemble

BUILD SUCCESSFUL

We see the directory used by the local build cache in the output. Apart from that the build was the same as without the build cache. Let’s clean and run the build again.

$> gradle clean
:clean

BUILD SUCCESSFUL
$> gradle --build-cache assemble
Build cache is an incubating feature.
Using local directory build cache for the root build (location = /home/user/.gradle/caches/build-cache-1).
:compileJava FROM-CACHE
:processResources
:classes
:jar
:assemble

BUILD SUCCESSFUL

Now we see that, instead of executing the :compileJava task, the outputs of the task have been loaded from the build cache. The other tasks have not been loaded from the build cache since they are not cacheable. This is due to :classes and :assemble being lifecycle tasks and :processResources and :jar being Copy-like tasks which are not cacheable since it is generally faster to execute them.

Cacheable tasks

Since a task describes all of its inputs and outputs, Gradle can compute a build cache key that uniquely defines the task’s outputs based on its inputs. That build cache key is used to request previous outputs from a build cache or push new outputs to the build cache. If the previous build is already populated by someone else, e.g. your continuous integration server or other developers, you can avoid executing most tasks locally.

The following inputs contribute to the build cache key for a task in the same way that they do for up-to-date checks:

  • The task type and its classpath

  • The names of the output properties

  • The names and values of properties annotated as described in the section called “Custom task types”

  • The names and values of properties added by the DSL via TaskInputs

  • The classpath of the Gradle distribution, buildSrc and plugins

  • The content of the build script when it affects execution of the task

Task types need to opt-in to task output caching using the @CacheableTask annotation. Note that @CacheableTask is not inherited by subclasses. Custom task types are not cacheable by default.
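As a minimal sketch, a hypothetical custom task type that opts in to caching might look like this; the complete input and output declarations are what make caching safe:

build.gradle

@CacheableTask
class ReverseFile extends DefaultTask {
    @InputFile
    @PathSensitive(PathSensitivity.NONE)
    File inputFile

    @OutputFile
    File outputFile

    @TaskAction
    void reverse() {
        // Because inputs and outputs are fully declared, the output can be
        // stored in and loaded from the build cache
        outputFile.text = inputFile.text.reverse()
    }
}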

Built-in cacheable tasks

Currently, a number of built-in Gradle tasks are cacheable, most notably the compilation and test tasks used by Java, Groovy and Scala projects.

Non-cacheable tasks

All other tasks are currently not cacheable, but this may change in the future for other languages (Kotlin) or domains (native, Android, Play). Some tasks, like Copy or Jar, usually do not make sense to make cacheable because Gradle is only copying files from one location to another. It also doesn’t make sense to make tasks cacheable that do not produce outputs or have no task actions.

Declaring task inputs and outputs

It is very important that a cacheable task has a complete picture of its inputs and outputs, so that the results from one build can be safely re-used somewhere else.

Missing task inputs can cause incorrect cache hits, where different results are treated as identical because the same cache key is used by both executions. Missing task outputs can cause build failures if Gradle does not completely capture all outputs for a given task. Wrongly declared task inputs can lead to cache misses especially when containing volatile data or absolute paths. (See the section called “Task inputs and outputs” on what should be declared as inputs and outputs.)

The task path is not an input to the build cache key. This means that tasks with different task paths can re-use each other’s outputs as long as Gradle determines that executing them yields the same result.

In order to ensure that the inputs and outputs are properly declared use integration tests (for example using TestKit) to check that a task produces the same outputs for identical inputs and captures all output files for the task. We suggest adding tests to ensure that the task inputs are relocatable, i.e. that the task can be loaded from the cache into a different build directory (see @PathSensitive).

In order to handle volatile inputs for your tasks consider configuring input normalization.
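For instance, a minimal sketch of runtime classpath normalization, assuming a hypothetical volatile file named build-info.properties that should not influence the build cache key:

build.gradle

normalization {
    runtimeClasspath {
        // Ignore this volatile file when computing up-to-date checks and cache keys
        ignore 'build-info.properties'
    }
}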

Known issues with task output caching

The task output caching feature has known issues that may impact the correctness of your build when using the build cache, and there are some caveats to keep in mind which may reduce the number of cache hits you get between machines. These issues will be corrected as this feature becomes stable.

Note that task output caching relies on incremental build. Problems that affect incremental builds can also affect task output caching even if the affected tasks are not cacheable. Most issues only cause problems if your build cache is populated by non-clean builds or if caching has been enabled for unsupported tasks. For a current list of open problems with incremental builds see these Github issues.

When reporting issues with the build cache, please check if your issue is a known issue or related to a known issue.

Correctness issues

These issues may affect the correctness of your build when using the build cache. Please consider these issues carefully.

Table 1. Correctness issues

Tracking the Java vendor implementation

Impact: Gradle currently tracks the major version of Java that is used for compilation and test execution. If your build uses several Java implementations (IBM, OpenJDK, Oracle, etc.) that are the same major version, Gradle will treat them all as equivalent and re-use outputs from any implementation.

Workaround: Only enable caching for builds that all use the same Java implementation, or manually add the Java vendor as an input to compilation and test execution tasks by using the runtime API for adding task inputs.

Tracking the Java version

Impact: Gradle currently tracks the major version of Java (6 vs 7 vs 8) that is used for compilation and test execution. If your build expects to use several minor releases (1.8.0_102 vs 1.8.0_25), Gradle will treat all of these as equivalent and re-use outputs from any minor version. In our experience, bytecode produced by each major version is functionally equivalent.

Workaround: Manually add the full Java version as an input to compilation and test execution tasks by using the runtime API for adding task inputs.

Environment variables are not tracked as inputs

Impact: For tasks that fork processes (like Test), Gradle does not track any of the environment variables visible to the process. This can allow undeclared inputs to affect the outputs of the task.

Workaround: Declare environment variables as inputs to the task with TaskInputs.property(java.lang.String, java.lang.Object).

Changes in Gradle’s file encoding that affect the build script

Impact: Gradle can produce different task output based on the file encoding used by the JVM. Gradle will use a default file encoding based on the operating system if file.encoding is not explicitly set.

Workaround: Set the UTF-8 file encoding on all tasks which allow setting the encoding, and use UTF-8 file encoding everywhere by setting file.encoding to UTF-8 for the Gradle JVM.

Javadoc ignores custom command-line options

Impact: Gradle’s Javadoc task does not take into account any changes to custom command-line options.

Workaround: You can add your custom options as input properties or disable caching of Javadoc.
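For example, a minimal sketch of the runtime API workaround mentioned above, adding the Java vendor and the full Java version as inputs to compilation tasks (the input property names are illustrative):

build.gradle

tasks.withType(JavaCompile) {
    // Track the exact JVM so that different vendors or minor versions miss the cache
    inputs.property('java.vendor', System.getProperty('java.vendor'))
    inputs.property('java.full.version', System.getProperty('java.runtime.version'))
}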

Caveats

These issues may affect the number of cache hits you get between machines.

Table 2. Caveats

Overlapping outputs between tasks

Impact: If two or more tasks share an output directory or files, Gradle will disable caching for these tasks when it detects an overlap.

Workaround: Use separate output directories for each task.

Using cached C/C++ object files with absolute paths

Impact: When Gradle compiles C/C++ code, object files tend to have absolute paths embedded inside them. This doesn’t affect their correctness, but it can interfere with debuggers that search for source code at those absolute paths.

Workaround: Build the project from the same absolute path on every machine.

Line endings in build script files

Impact: Gradle calculates the build cache key based on the MD5 hash of the build script contents. If the line endings are different between developers and the CI servers, Gradle will calculate different build cache keys even when all other inputs to a task are the same.

Workaround: Check if your VCS will change source file line endings and configure it to use a consistent line ending across all platforms.

Absolute paths in command-line arguments and system properties

Impact: Gradle provides ways of specifying the path sensitivity for individual task properties (see @PathSensitive); however, it is common to need to pass absolute paths to tools or to tests via system properties or command-line arguments. These kinds of inputs will cause cache misses because not every developer or CI server uses an identical absolute path to the root of a build. Tasks like Test include system properties and JVM arguments as inputs to the build cache key.

Workaround: If possible, use relative paths (via Project.relativePath(java.lang.Object)). Further tooling will be provided later.

Using JaCoCo disables caching of the Test task

Impact: The JaCoCo agent relies on appending to a shared output file that may be left over from a different test execution. If Gradle allowed Test tasks to be cacheable with the JaCoCo plugin, it could not guarantee the same results each time.

Workaround: None.

Adding new actions to cacheable tasks in a build file makes that task sensitive to unrelated changes to the build file

Impact: Actions added in a build file become part of the task’s cache key, so unrelated changes to the build file can invalidate the task. Actions added by a plugin (from buildSrc or externally) do not have this problem, because their classloader is restricted to the classpath of the plugin.

Workaround: Avoid adding actions to cacheable tasks in a build file.

Modifying inputs or outputs during task execution

Impact: It’s possible to modify a task’s inputs or outputs during execution in ways that change the output of a task. This breaks incremental builds and can cause problems with the build cache.

Workaround: Use a configure task to finalize configuration for a given task. A configure task configures another task as part of its execution.

Order of input files affects outputs

Impact: Some tools are sensitive to the order of their inputs and will produce slightly different output. Gradle will usually provide the order of files from the filesystem, which will be different across operating systems.

Workaround: Provide a stable order for tools affected by order.

ANTLR3 produces output with a timestamp

Impact: When generating Java source code with ANTLR3 and the ANTLR Plugin, the generated sources contain a timestamp that reduces how often Java compilation will be cached. ANTLR2 and ANTLR4 are not affected.

Workaround: If you cannot upgrade to ANTLR4, use a custom template or remove the timestamp in a doLast action.

Configure the Build Cache

You can configure the build cache by using the Settings.buildCache(org.gradle.api.Action) block in settings.gradle.

Gradle supports a local and a remote build cache that can be configured separately. When both build caches are enabled, Gradle tries to load build outputs from the local build cache first, and then tries the remote build cache if no build outputs are found. If outputs are found in the remote cache, they are also stored in the local cache, so next time they will be found locally. Gradle pushes build outputs to any build cache that is enabled and has BuildCache.isPush() set to true.

By default, the local build cache has push enabled, and the remote build cache has push disabled.

The local build cache is pre-configured to be a DirectoryBuildCache and enabled by default. The remote build cache can be configured by specifying the type of build cache to connect to (BuildCacheConfiguration.remote(java.lang.Class)).

Built-in local build cache

The built-in local build cache, DirectoryBuildCache, uses a directory to store build cache artifacts. By default, this directory resides in the Gradle user home directory, but its location is configurable.

Gradle will periodically clean-up the local cache directory to reduce it to a configurable target size. This means that the local build cache directory may temporarily grow over the target size until the next clean-up is scheduled.

For more details on the configuration options refer to the DSL documentation of DirectoryBuildCache. Here is an example of the configuration.

Example 47. Configure the local cache

settings.gradle

buildCache {
    local(DirectoryBuildCache) {
        directory = new File(rootDir, 'build-cache')
        targetSizeInMB = 1024
    }
}

Remote HTTP build cache

Gradle has built-in support for connecting to a remote build cache backend via HTTP. For more details on what the protocol looks like see HttpBuildCache. Note that by using the following configuration the local build cache will be used for storing build outputs while the local and the remote build cache will be used for retrieving build outputs.

Example 48. Pull from HttpBuildCache

settings.gradle

buildCache {
    remote(HttpBuildCache) {
        url = 'https://example.com:8123/cache/'
    }
}

You can configure the credentials the HttpBuildCache uses to access the build cache server as shown in the following example.

Example 49. Configure remote HTTP cache

settings.gradle

buildCache {
    remote(HttpBuildCache) {
        url = 'http://example.com:8123/cache/'
        credentials {
            username = 'build-cache-user'
            password = 'some-complicated-password'
        }
    }
}

You may encounter problems with an untrusted SSL certificate when you try to use a build cache backend with an HTTPS URL. The ideal solution is for someone to add a valid SSL certificate to the build cache backend, but we recognize that you may not be able to do that. In that case, set HttpBuildCache.isAllowUntrustedServer() to true:

Example 50. Allow untrusted SSL certificate for HttpBuildCache

settings.gradle

buildCache {
    remote(HttpBuildCache) {
        url = 'https://example.com:8123/cache/'
        allowUntrustedServer = true
    }
}

This is a convenient workaround, but you shouldn’t use it as a long-term solution.

Configuration use cases

The recommended use case for the build cache is that your continuous integration server populates the remote build cache with clean builds while developers pull from the remote build cache and push to a local build cache. The configuration would then look as follows.

Example 51. Recommended setup for CI push use case

settings.gradle

ext.isCiServer = System.getenv().containsKey("CI")

buildCache {
    local {
        enabled = !isCiServer
    }
    remote(HttpBuildCache) {
        url = 'https://example.com:8123/cache/'
        push = isCiServer
    }
}

If you use a buildSrc directory, you should make sure that it uses the same build cache configuration as the main build. This can be achieved by applying the same script to buildSrc/settings.gradle and settings.gradle as shown in the following example.

Example 52. Consistent setup for buildSrc and main build

settings.gradle

apply from: new File(settingsDir, 'gradle/buildCacheSettings.gradle')

buildSrc/settings.gradle

apply from: new File(settingsDir, '../gradle/buildCacheSettings.gradle')

gradle/buildCacheSettings.gradle

ext.isCiServer = System.getenv().containsKey("CI")

buildCache {
    local {
        enabled = !isCiServer
    }
    remote(HttpBuildCache) {
        url = 'https://example.com:8123/cache/'
        push = isCiServer
    }
}

It is also possible to configure the build cache from an init script, which can be used from the command line, added to your Gradle user home or be a part of your custom Gradle distribution.

Example 53. Init script to configure the build cache

init.gradle

gradle.settingsEvaluated { settings ->
    settings.buildCache {
        // vvv Your custom configuration goes here
        remote(HttpBuildCache) {
            url = 'https://example.com:8123/cache/'
        }
        // ^^^ Your custom configuration goes here
    }
}

Build cache and composite builds

Gradle’s composite build feature allows one Gradle build to include other complete Gradle builds. Such included builds will inherit the build cache configuration from the top level build, regardless of whether the included builds define build cache configuration themselves or not.

The build cache configuration present for any included build is effectively ignored, in favour of the top level build’s configuration. This also applies to any buildSrc projects of any included builds.

How to set up an HTTP build cache backend

Gradle provides a Docker image for a build cache node, which can connect with Gradle Enterprise for centralized management. The cache node can also be used without a Gradle Enterprise installation with restricted functionality.

Implement your own Build Cache

Using a different build cache backend to store build outputs (which is not covered by the built-in support for connecting to an HTTP backend) requires implementing your own logic for connecting to your custom build cache backend. To this end, custom build cache types can be registered via BuildCacheConfiguration.registerBuildCacheService(java.lang.Class, java.lang.Class). For an example of what this could look like see the Gradle Hazelcast plugin.

Gradle Enterprise includes a high-performance, easy to install and operate, shared build cache backend.

Writing Gradle build scripts

Build Script Basics

Projects and tasks

Everything in Gradle sits on top of two basic concepts: projects and tasks.

Every Gradle build is made up of one or more projects. What a project represents depends on what it is that you are doing with Gradle. For example, a project might represent a library JAR or a web application. It might represent a distribution ZIP assembled from the JARs produced by other projects. A project does not necessarily represent a thing to be built. It might represent a thing to be done, such as deploying your application to staging or production environments. Don’t worry if this seems a little vague for now. Gradle’s build-by-convention support adds a more concrete definition for what a project is.

Each project is made up of one or more tasks. A task represents some atomic piece of work which a build performs. This might be compiling some classes, creating a JAR, generating Javadoc, or publishing some archives to a repository.

For now, we will look at defining some simple tasks in a build with one project. Later chapters will look at working with multiple projects and more about working with projects and tasks.

Hello world

You run a Gradle build using the gradle command. The gradle command looks for a file called build.gradle in the current directory.[4] We call this build.gradle file a build script, although strictly speaking it is a build configuration script, as we will see later. The build script defines a project and its tasks.

To try this out, create the following build script named build.gradle.

Example 54. Your first build script

build.gradle

task hello {
    doLast {
        println 'Hello world!'
    }
}

In a command-line shell, move to the containing directory and execute the build script with gradle -q hello:

What does -q do?

Most of the examples in this user guide are run with the -q command-line option. This suppresses Gradle’s log messages, so that only the output of the tasks is shown. This keeps the example output in this user guide a little clearer. You don’t need to use this option if you don’t want to. See Logging for more details about the command-line options which affect Gradle’s output.

Example 55. Execution of a build script

Output of gradle -q hello

> gradle -q hello
Hello world!

What’s going on here? This build script defines a single task, called hello, and adds an action to it. When you run gradle hello, Gradle executes the hello task, which in turn executes the action you’ve provided. The action is simply a closure containing some Groovy code to execute.

If you think this looks similar to Ant’s targets, you would be right. Gradle tasks are the equivalent to Ant targets, but as you will see, they are much more powerful. We have used a different terminology than Ant as we think the word task is more expressive than the word target. Unfortunately this introduces a terminology clash with Ant, as Ant calls its commands, such as javac or copy, tasks. So when we talk about tasks, we always mean Gradle tasks, which are the equivalent to Ant’s targets. If we talk about Ant tasks (Ant commands), we explicitly say Ant task.

A shortcut task definition

This functionality is deprecated and will be removed in Gradle 5.0 without replacement. Use the methods Task.doFirst(org.gradle.api.Action) and Task.doLast(org.gradle.api.Action) to define an action instead, as demonstrated by the rest of the examples in this chapter.

There is a shorthand way to define a task like our hello task above, which is more concise.

Example 56. A task definition shortcut

build.gradle

task hello << {
    println 'Hello world!'
}

Again, this defines a task called hello with a single closure to execute. The << operator is simply an alias for doLast.

Build scripts are code

Gradle’s build scripts give you the full power of Groovy. As an appetizer, have a look at this:

Example 57. Using Groovy in Gradle's tasks

build.gradle

task upper {
    doLast {
        String someString = 'mY_nAmE'
        println "Original: " + someString
        println "Upper case: " + someString.toUpperCase()
    }
}

Output of gradle -q upper

> gradle -q upper
Original: mY_nAmE
Upper case: MY_NAME

or

Example 58. Using Groovy in Gradle's tasks

build.gradle

task count {
    doLast {
        4.times { print "$it " }
    }
}

Output of gradle -q count

> gradle -q count
0 1 2 3 

Task dependencies

As you probably have guessed, you can declare tasks that depend on other tasks.

Example 59. Declaration of task that depends on other task

build.gradle

task hello {
    doLast {
        println 'Hello world!'
    }
}
task intro(dependsOn: hello) {
    doLast {
        println "I'm Gradle"
    }
}

Output of gradle -q intro

> gradle -q intro
Hello world!
I'm Gradle

To add a dependency, the corresponding task does not need to exist.

Example 60. Lazy dependsOn - the other task does not exist (yet)

build.gradle

task taskX(dependsOn: 'taskY') {
    doLast {
        println 'taskX'
    }
}
task taskY {
    doLast {
        println 'taskY'
    }
}

Output of gradle -q taskX

> gradle -q taskX
taskY
taskX

The dependency of taskX on taskY is declared before taskY is defined. This is very important for multi-project builds. Task dependencies are discussed in more detail in the section called “Adding dependencies to a task”.

Please notice that you can’t use shortcut notation (see the section called “Shortcut notations”) when referring to a task that is not yet defined.

Dynamic tasks

The power of Groovy can be used for more than defining what a task does. For example, you can also use it to dynamically create tasks.

Example 61. Dynamic creation of a task

build.gradle

4.times { counter ->
    task "task$counter" {
        doLast {
            println "I'm task number $counter"
        }
    }
}

Output of gradle -q task1

> gradle -q task1
I'm task number 1

Manipulating existing tasks

Once tasks are created they can be accessed via an API. For instance, you could use this to dynamically add dependencies to a task, at runtime. Ant doesn’t allow anything like this.

Example 62. Accessing a task via API - adding a dependency

build.gradle

4.times { counter ->
    task "task$counter" {
        doLast {
            println "I'm task number $counter"
        }
    }
}
task0.dependsOn task2, task3

Output of gradle -q task0

> gradle -q task0
I'm task number 2
I'm task number 3
I'm task number 0

Or you can add behavior to an existing task.

Example 63. Accessing a task via API - adding behaviour

build.gradle

task hello {
    doLast {
        println 'Hello Earth'
    }
}
hello.doFirst {
    println 'Hello Venus'
}
hello.doLast {
    println 'Hello Mars'
}
hello {
    doLast {
        println 'Hello Jupiter'
    }
}

Output of gradle -q hello

> gradle -q hello
Hello Venus
Hello Earth
Hello Mars
Hello Jupiter

The calls doFirst and doLast can be executed multiple times. They add an action to the beginning or the end of the task’s actions list. When the task executes, the actions in the action list are executed in order.

Shortcut notations

There is a convenient notation for accessing an existing task. Each task is available as a property of the build script:

Example 64. Accessing task as a property of the build script

build.gradle

task hello {
    doLast {
        println 'Hello world!'
    }
}
hello.doLast {
    println "Greetings from the $hello.name task."
}

Output of gradle -q hello

> gradle -q hello
Hello world!
Greetings from the hello task.

This enables very readable code, especially when using the tasks provided by the plugins, like the compile task.

Extra task properties

You can add your own properties to a task. To add a property named myProperty, set ext.myProperty to an initial value. From that point on, the property can be read and set like a predefined task property.

Example 65. Adding extra properties to a task

build.gradle

task myTask {
    ext.myProperty = "myValue"
}

task printTaskProperties {
    doLast {
        println myTask.myProperty
    }
}

Output of gradle -q printTaskProperties

> gradle -q printTaskProperties
myValue

Extra properties aren’t limited to tasks. You can read more about them in the section called “Extra properties”.

Using Ant Tasks

Ant tasks are first-class citizens in Gradle. Gradle provides excellent integration for Ant tasks by simply relying on Groovy. Groovy is shipped with the fantastic AntBuilder. Using Ant tasks from Gradle is as convenient as, and more powerful than, using Ant tasks from a build.xml file. From the example below, you can learn how to execute Ant tasks and how to access Ant properties:

Example 66. Using AntBuilder to execute ant.loadfile target

build.gradle

task loadfile {
    doLast {
        def files = file('../antLoadfileResources').listFiles().sort()
        files.each { File file ->
            if (file.isFile()) {
                ant.loadfile(srcFile: file, property: file.name)
                println " *** $file.name ***"
                println "${ant.properties[file.name]}"
            }
        }
    }
}

Output of gradle -q loadfile

> gradle -q loadfile
 *** agile.manifesto.txt ***
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration  over contract negotiation
Responding to change over following a plan
 *** gradle.manifesto.txt ***
Make the impossible possible, make the possible easy and make the easy elegant.
(inspired by Moshe Feldenkrais)

There is a lot more you can do with Ant in your build scripts. You can find out more in Using Ant from Gradle.

Using methods

Gradle scales in how you can organize your build logic. The first level of organizing the build logic for the example above is extracting a method.

Example 67. Using methods to organize your build logic

build.gradle

task checksum {
    doLast {
        fileList('../antLoadfileResources').each { File file ->
            ant.checksum(file: file, property: "cs_$file.name")
            println "$file.name Checksum: ${ant.properties["cs_$file.name"]}"
        }
    }
}

task loadfile {
    doLast {
        fileList('../antLoadfileResources').each { File file ->
            ant.loadfile(srcFile: file, property: file.name)
            println "I'm fond of $file.name"
        }
    }
}

File[] fileList(String dir) {
    file(dir).listFiles({file -> file.isFile() } as FileFilter).sort()
}

Output of gradle -q loadfile

> gradle -q loadfile
I'm fond of agile.manifesto.txt
I'm fond of gradle.manifesto.txt

Later you will see that such methods can be shared among subprojects in multi-project builds. If your build logic becomes more complex, Gradle offers you other very convenient ways to organize it. We have devoted a whole chapter to this. See Organizing Build Logic.

Default tasks

Gradle allows you to define one or more default tasks that are executed if no other tasks are specified.

Example 68. Defining a default task

build.gradle

defaultTasks 'clean', 'run'

task clean {
    doLast {
        println 'Default Cleaning!'
    }
}

task run {
    doLast {
        println 'Default Running!'
    }
}

task other {
    doLast {
        println "I'm not a default task!"
    }
}

Output of gradle -q

> gradle -q
Default Cleaning!
Default Running!

This is equivalent to running gradle clean run. In a multi-project build every subproject can have its own specific default tasks. If a subproject does not specify default tasks, the default tasks of the parent project are used (if defined).
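
For illustration, here is a minimal sketch of a subproject defining its own defaults; it assumes a hypothetical subproject named api that is included in settings.gradle:

build.gradle

project(':api') {
    defaultTasks 'verify'
    task verify {
        doLast {
            println 'Verifying api'
        }
    }
}

Running gradle -q from within the api directory would then execute verify, while invoking Gradle from the root project still uses the root project’s default tasks.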

Configure by DAG

As we later describe in full detail (see Build Lifecycle), Gradle has a configuration phase and an execution phase. After the configuration phase, Gradle knows all tasks that should be executed. Gradle offers you a hook to make use of this information. A use-case for this would be to check if the release task is among the tasks to be executed. Depending on this, you can assign different values to some variables.

In the following example, execution of the distribution and release tasks results in different values of the version variable.

Example 69. Different outcomes of build depending on chosen tasks

build.gradle

task distribution {
    doLast {
        println "We build the zip with version=$version"
    }
}

task release(dependsOn: 'distribution') {
    doLast {
        println 'We release now'
    }
}

gradle.taskGraph.whenReady {taskGraph ->
    if (taskGraph.hasTask(release)) {
        version = '1.0'
    } else {
        version = '1.0-SNAPSHOT'
    }
}

Output of gradle -q distribution

> gradle -q distribution
We build the zip with version=1.0-SNAPSHOT

Output of gradle -q release

> gradle -q release
We build the zip with version=1.0
We release now

The important thing is that whenReady affects the release task before the release task is executed. This works even when the release task is not the primary task (i.e., the task passed to the gradle command).

Where to next?

In this chapter, we have had a first look at tasks. But this is not the end of the story for tasks. If you want to jump into more of the details, have a look at Authoring Tasks.

Otherwise, continue on to the tutorials in Java Quickstart and Dependency Management for Java Projects.



[4] There are command-line switches to change this behavior. See Command-Line Interface.

Build Init Plugin

The Build Init plugin is currently incubating. Please be aware that the DSL and other configuration may change in later Gradle versions.

The Gradle Build Init plugin can be used to bootstrap the process of creating a new Gradle build. It supports creating brand new projects of different types as well as converting existing builds (e.g. an Apache Maven build) to Gradle builds.

Gradle plugins typically need to be applied to a project before they can be used (see the section called “Using plugins”). The Build Init plugin is an automatically applied plugin, which means you do not need to apply it explicitly. To use the plugin, simply execute the task named init where you would like to create the Gradle build. There is no need to create a “stub” build.gradle file in order to apply the plugin.

The init task also leverages the wrapper task to generate the Gradle Wrapper files for the project.
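
For example, in an empty directory you could simply run:

> gradle init

which would typically generate a build.gradle file of the “basic” type (described below), along with the Gradle Wrapper files.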

Tasks

The plugin adds the following tasks to the project:

Table 3. Build Init plugin - tasks

Task name    Depends on    Type        Description
init         wrapper       InitBuild   Generates a Gradle project.
wrapper      -             Wrapper     Generates Gradle wrapper files.

What to set up

The init task supports different build setup types. The type is specified by supplying a --type argument value. For example, to create a Java library project, simply execute: gradle init --type java-library.

If a --type parameter is not supplied, Gradle will attempt to infer the type from the environment. For example, it will infer a type value of “pom” if it finds a pom.xml to convert to a Gradle build.

If the type could not be inferred, the type “basic” will be used.

The init task also supports generating build scripts using either the Gradle Groovy DSL or the Gradle Kotlin DSL. The build script DSL defaults to the Groovy DSL and is selected by supplying a --dsl argument value. For example, to create a Java library project with Kotlin DSL build scripts, execute: gradle init --type java-library --dsl kotlin.

All build setup types include the setup of the Gradle Wrapper.

Note that the migration from Maven builds only supports the Groovy DSL for generated build scripts.

Build init types

As this plugin is currently incubating, only a few build init types are supported. More types will be added in future Gradle releases.

“pom” (Maven conversion)

The “pom” type can be used to convert an Apache Maven build to a Gradle build. This works by converting the POM to one or more Gradle files. It can only be used if there is a valid “pom.xml” file in the directory that the init task is invoked in or, if invoked via the “-p” command-line option, in the specified project directory. This “pom” type will be inferred automatically if such a file exists.

The Maven conversion implementation was inspired by the maven2gradle tool that was originally developed by Gradle community members.

The conversion process has the following features:

  • Uses effective POM and effective settings (support for POM inheritance, dependency management, properties)

  • Supports both single module and multimodule projects

  • Supports custom module names (that differ from directory names)

  • Generates general metadata - id, description and version

  • Applies maven, java and war plugins (as needed)

  • Supports packaging war projects as jars if needed

  • Generates dependencies (both external and inter-module)

  • Generates download repositories (inc. local Maven repository)

  • Adjusts Java compiler settings

  • Supports packaging of sources and tests

  • Supports TestNG runner

  • Generates global exclusions from Maven enforcer plugin settings
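
To run the conversion explicitly rather than relying on inference, supply the type on the command line:

> gradle init --type pom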

java-application

The “java-application” build init type is not inferable. It must be explicitly specified.

It has the following features:

  • Uses the “application” plugin to produce a command-line application implemented using Java

  • Uses the “jcenter” dependency repository

  • Uses JUnit for testing

  • Has directories in the conventional locations for source code

  • Contains a sample class and unit test, if there are no existing source or test files

An alternative test framework can be specified by supplying a --test-framework argument value. To use a different test framework, execute one of the following commands:

  • gradle init --type java-application --test-framework spock: Uses Spock for testing instead of JUnit

  • gradle init --type java-application --test-framework testng: Uses TestNG for testing instead of JUnit

java-library

The “java-library” build init type is not inferable. It must be explicitly specified.

It has the following features:

  • Uses the “java” plugin to produce a library Jar

  • Uses the “jcenter” dependency repository

  • Uses JUnit for testing

  • Has directories in the conventional locations for source code

  • Contains a sample class and unit test, if there are no existing source or test files

An alternative test framework can be specified by supplying a --test-framework argument value. To use a different test framework, execute one of the following commands:

  • gradle init --type java-library --test-framework spock: Uses Spock for testing instead of JUnit

  • gradle init --type java-library --test-framework testng: Uses TestNG for testing instead of JUnit

scala-library

The “scala-library” build init type is not inferable. It must be explicitly specified.

It has the following features:

  • Uses the “scala” plugin to produce a library Jar

  • Uses the “jcenter” dependency repository

  • Uses Scala 2.10

  • Uses ScalaTest for testing

  • Has directories in the conventional locations for source code

  • Contains a sample Scala class and an associated ScalaTest test suite, if there are no existing source or test files

  • Uses the Zinc Scala compiler by default

groovy-library

The “groovy-library” build init type is not inferable. It must be explicitly specified.

It has the following features:

  • Uses the “groovy” plugin to produce a library Jar

  • Uses the “jcenter” dependency repository

  • Uses Groovy 2.x

  • Uses Spock testing framework for testing

  • Has directories in the conventional locations for source code

  • Contains a sample Groovy class and an associated Spock specification, if there are no existing source or test files

groovy-application

The “groovy-application” build init type is not inferable. It must be explicitly specified.

It has the following features:

  • Uses the “groovy” plugin

  • Uses the “application” plugin to produce a command-line application implemented using Groovy

  • Uses the “jcenter” dependency repository

  • Uses Groovy 2.x

  • Uses Spock testing framework for testing

  • Has directories in the conventional locations for source code

  • Contains a sample Groovy class and an associated Spock specification, if there are no existing source or test files

“basic”

The “basic” build init type is useful for creating a brand new Gradle project. It creates a sample build.gradle file with comments and links to help you get started.

This type is used when no type was explicitly specified, and no type could be inferred.

Writing Build Scripts

This chapter looks at some of the details of writing a build script.

The Gradle build language

Gradle provides a domain specific language, or DSL, for describing builds. This build language is based on Groovy, with some additions to make it easier to describe a build.

A build script can contain any Groovy language element.[5] Gradle assumes that each build script is encoded using UTF-8.
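
As a quick illustration (this snippet is not one of the manual’s sample projects), ordinary Groovy constructs such as local variables, loops and methods can sit alongside task definitions:

build.gradle

def langs = ['groovy', 'java']

langs.each { lang ->
    task "hello${lang.capitalize()}" {
        doLast {
            println greeting(lang)
        }
    }
}

String greeting(String lang) {
    return "Hello from $lang"
}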

The Project API

In the tutorial in Java Quickstart we used, for example, the apply() method. Where does this method come from? We said earlier that the build script defines a project in Gradle. For each project in the build, Gradle creates an object of type Project and associates this Project object with the build script. As the build script executes, it configures this Project object:

Getting help writing build scripts

Don’t forget that your build script is simply Groovy code that drives the Gradle API. And the Project interface is your starting point for accessing everything in the Gradle API. So, if you’re wondering what 'tags' are available in your build script, you can start with the documentation for the Project interface.

  • Any method you call in your build script which is not defined in the build script, is delegated to the Project object.

  • Any property you access in your build script, which is not defined in the build script, is delegated to the Project object.

Let’s try this out and try to access the name property of the Project object.

Example 70. Accessing property of the Project object

build.gradle

println name
println project.name

Output of gradle -q check

> gradle -q check
projectApi
projectApi

Both println statements print out the same property. The first uses auto-delegation to the Project object for properties not defined in the build script. The other statement uses the project property, which is available to any build script and returns the associated Project object. You only need to use the project property if you define a property or a method with the same name as a member of the Project object.

Standard project properties

The Project object provides some standard properties, which are available in your build script. The following table lists a few of the commonly used ones.

Table 4. Project Properties

Name         Type        Default Value
project      Project     The Project instance
name         String      The name of the project directory.
path         String      The absolute path of the project.
description  String      A description for the project.
projectDir   File        The directory containing the build script.
buildDir     File        projectDir/build
group        Object      unspecified
version      Object      unspecified
ant          AntBuilder  An AntBuilder instance

The Script API

When Gradle executes a script, it compiles the script into a class which implements Script. This means that all of the properties and methods declared by the Script interface are available in your script.
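
For example, the logger property declared by the Script interface is available in every script; a minimal sketch:

build.gradle

logger.quiet('A message logged via the Script interface')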

Declaring variables

There are two kinds of variables that can be declared in a build script: local variables and extra properties.

Local variables

Local variables are declared with the def keyword. They are only visible in the scope where they have been declared. Local variables are a feature of the underlying Groovy language.

Example 71. Using local variables

build.gradle

def dest = "dest"

task copy(type: Copy) {
    from "source"
    into dest
}

Extra properties

All enhanced objects in Gradle’s domain model can hold extra user-defined properties. This includes, but is not limited to, projects, tasks, and source sets. Extra properties can be added, read and set via the owning object’s ext property. Alternatively, an ext block can be used to add multiple properties at once.

Example 72. Using extra properties

build.gradle

apply plugin: "java"

ext {
    springVersion = "3.1.0.RELEASE"
    emailNotification = "build@master.org"
}

sourceSets.all { ext.purpose = null }

sourceSets {
    main {
        purpose = "production"
    }
    test {
        purpose = "test"
    }
    plugin {
        purpose = "production"
    }
}

task printProperties {
    doLast {
        println springVersion
        println emailNotification
        sourceSets.matching { it.purpose == "production" }.each { println it.name }
    }
}

Output of gradle -q printProperties

> gradle -q printProperties
3.1.0.RELEASE
build@master.org
main
plugin

In this example, an ext block adds two extra properties to the project object. Additionally, a property named purpose is added to each source set by setting ext.purpose to null (null is a permissible value). Once the properties have been added, they can be read and set like predefined properties.

By requiring special syntax for adding a property, Gradle can fail fast when an attempt is made to set a (predefined or extra) property but the property is misspelled or does not exist. Extra properties can be accessed from anywhere their owning object can be accessed, giving them a wider scope than local variables. Extra properties on a project are visible from its subprojects.
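
A minimal sketch of that wider scope, assuming a hypothetical subproject shared that is included in settings.gradle:

build.gradle

ext.companyName = 'Acme'

project(':shared') {
    task printCompany {
        doLast {
            // resolved from the parent project's extra properties
            println companyName
        }
    }
}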

For further details on extra properties and their API, see the ExtraPropertiesExtension class in the API documentation.

Configuring arbitrary objects

You can configure arbitrary objects in the following very readable way.

Example 73. Configuring arbitrary objects

build.gradle

task configure {
    doLast {
        def pos = configure(new java.text.FieldPosition(10)) {
            beginIndex = 1
            endIndex = 5
        }
        println pos.beginIndex
        println pos.endIndex
    }
}

Output of gradle -q configure

> gradle -q configure
1
5

Configuring arbitrary objects using an external script

You can also configure arbitrary objects using an external script.

Example 74. Configuring arbitrary objects using a script

build.gradle

task configure {
    doLast {
        def pos = new java.text.FieldPosition(10)
        // Apply the script
        apply from: 'other.gradle', to: pos
        println pos.beginIndex
        println pos.endIndex
    }
}

other.gradle

// Set properties.
beginIndex = 1
endIndex = 5

Output of gradle -q configure

> gradle -q configure
1
5

Some Groovy basics

The Groovy language provides plenty of features for creating DSLs, and the Gradle build language takes advantage of these. Understanding how the build language works will help you when you write your build script, and in particular, when you start to write custom plugins and tasks.

Groovy JDK

Groovy adds lots of useful methods to the standard Java classes. For example, Iterable gets an each method, which iterates over the elements of the Iterable:

Example 75. Groovy JDK methods

build.gradle

// Iterable gets an each() method
configurations.runtime.each { File f -> println f }

Have a look at http://groovy-lang.org/gdk.html for more details.

Property accessors

Groovy automatically converts a property reference into a call to the appropriate getter or setter method.

Example 76. Property accessors

build.gradle

// Using a getter method
println project.buildDir
println getProject().getBuildDir()

// Using a setter method
project.buildDir = 'target'
getProject().setBuildDir('target')

Optional parentheses on method calls

Parentheses are optional for method calls.

Example 77. Method call without parentheses

build.gradle

test.systemProperty 'some.prop', 'value'
test.systemProperty('some.prop', 'value')

List and map literals

Groovy provides some shortcuts for defining List and Map instances. Both kinds of literals are straightforward, but map literals have some interesting twists.

For instance, the “apply” method (where you typically apply plugins) actually takes a map parameter. However, when you write a line like “apply plugin: 'java'”, you aren’t using a map literal but “named parameters”, which have almost exactly the same syntax as a map literal (without the wrapping brackets). That named parameter list gets converted to a map when the method is called, but it doesn’t start out as one.

Example 78. List and map literals

build.gradle

// List literal
test.includes = ['org/gradle/api/**', 'org/gradle/internal/**']

List<String> list = new ArrayList<String>()
list.add('org/gradle/api/**')
list.add('org/gradle/internal/**')
test.includes = list

// Map literal.
Map<String, String> map = [key1:'value1', key2: 'value2']

// Groovy will coerce named arguments
// into a single map argument
apply plugin: 'java'

Closures as the last parameter in a method

The Gradle DSL uses closures in many places. You can find out more about closures here. When the last parameter of a method is a closure, you can place the closure after the method call:

Example 79. Closure as method parameter

build.gradle

repositories {
    println "in a closure"
}
repositories() { println "in a closure" }
repositories({ println "in a closure" })

Closure delegate

Each closure has a delegate object, which Groovy uses to look up variable and method references which are not local variables or parameters of the closure. Gradle uses this for configuration closures, where the delegate object is set to the object to be configured.

Example 80. Closure delegates

build.gradle

dependencies {
    assert delegate == project.dependencies
    testCompile('junit:junit:4.12')
    delegate.testCompile('junit:junit:4.12')
}

Default imports

To make build scripts more concise, Gradle automatically adds a set of import statements to the Gradle scripts. This means that instead of using throw new org.gradle.api.tasks.StopExecutionException() you can just type throw new StopExecutionException() instead.

Listed below are the imports added to each script:

Gradle default imports. 

import org.gradle.*
import org.gradle.api.*
import org.gradle.api.artifacts.*
import org.gradle.api.artifacts.cache.*
import org.gradle.api.artifacts.component.*
import org.gradle.api.artifacts.dsl.*
import org.gradle.api.artifacts.ivy.*
import org.gradle.api.artifacts.maven.*
import org.gradle.api.artifacts.query.*
import org.gradle.api.artifacts.repositories.*
import org.gradle.api.artifacts.result.*
import org.gradle.api.artifacts.transform.*
import org.gradle.api.artifacts.type.*
import org.gradle.api.attributes.*
import org.gradle.api.component.*
import org.gradle.api.credentials.*
import org.gradle.api.distribution.*
import org.gradle.api.distribution.plugins.*
import org.gradle.api.dsl.*
import org.gradle.api.execution.*
import org.gradle.api.file.*
import org.gradle.api.initialization.*
import org.gradle.api.initialization.dsl.*
import org.gradle.api.invocation.*
import org.gradle.api.java.archives.*
import org.gradle.api.logging.*
import org.gradle.api.logging.configuration.*
import org.gradle.api.model.*
import org.gradle.api.plugins.*
import org.gradle.api.plugins.announce.*
import org.gradle.api.plugins.antlr.*
import org.gradle.api.plugins.buildcomparison.gradle.*
import org.gradle.api.plugins.osgi.*
import org.gradle.api.plugins.quality.*
import org.gradle.api.plugins.scala.*
import org.gradle.api.provider.*
import org.gradle.api.publish.*
import org.gradle.api.publish.ivy.*
import org.gradle.api.publish.ivy.plugins.*
import org.gradle.api.publish.ivy.tasks.*
import org.gradle.api.publish.maven.*
import org.gradle.api.publish.maven.plugins.*
import org.gradle.api.publish.maven.tasks.*
import org.gradle.api.publish.plugins.*
import org.gradle.api.publish.tasks.*
import org.gradle.api.reflect.*
import org.gradle.api.reporting.*
import org.gradle.api.reporting.components.*
import org.gradle.api.reporting.dependencies.*
import org.gradle.api.reporting.dependents.*
import org.gradle.api.reporting.model.*
import org.gradle.api.reporting.plugins.*
import org.gradle.api.resources.*
import org.gradle.api.specs.*
import org.gradle.api.tasks.*
import org.gradle.api.tasks.ant.*
import org.gradle.api.tasks.application.*
import org.gradle.api.tasks.bundling.*
import org.gradle.api.tasks.compile.*
import org.gradle.api.tasks.diagnostics.*
import org.gradle.api.tasks.incremental.*
import org.gradle.api.tasks.javadoc.*
import org.gradle.api.tasks.scala.*
import org.gradle.api.tasks.testing.*
import org.gradle.api.tasks.testing.junit.*
import org.gradle.api.tasks.testing.testng.*
import org.gradle.api.tasks.util.*
import org.gradle.api.tasks.wrapper.*
import org.gradle.authentication.*
import org.gradle.authentication.aws.*
import org.gradle.authentication.http.*
import org.gradle.buildinit.plugins.*
import org.gradle.buildinit.tasks.*
import org.gradle.caching.*
import org.gradle.caching.configuration.*
import org.gradle.caching.http.*
import org.gradle.caching.local.*
import org.gradle.concurrent.*
import org.gradle.external.javadoc.*
import org.gradle.ide.visualstudio.*
import org.gradle.ide.visualstudio.plugins.*
import org.gradle.ide.visualstudio.tasks.*
import org.gradle.ide.xcode.*
import org.gradle.ide.xcode.plugins.*
import org.gradle.ide.xcode.tasks.*
import org.gradle.ivy.*
import org.gradle.jvm.*
import org.gradle.jvm.application.scripts.*
import org.gradle.jvm.application.tasks.*
import org.gradle.jvm.platform.*
import org.gradle.jvm.plugins.*
import org.gradle.jvm.tasks.*
import org.gradle.jvm.tasks.api.*
import org.gradle.jvm.test.*
import org.gradle.jvm.toolchain.*
import org.gradle.language.*
import org.gradle.language.assembler.*
import org.gradle.language.assembler.plugins.*
import org.gradle.language.assembler.tasks.*
import org.gradle.language.base.*
import org.gradle.language.base.artifact.*
import org.gradle.language.base.compile.*
import org.gradle.language.base.plugins.*
import org.gradle.language.base.sources.*
import org.gradle.language.c.*
import org.gradle.language.c.plugins.*
import org.gradle.language.c.tasks.*
import org.gradle.language.coffeescript.*
import org.gradle.language.cpp.*
import org.gradle.language.cpp.plugins.*
import org.gradle.language.cpp.tasks.*
import org.gradle.language.java.*
import org.gradle.language.java.artifact.*
import org.gradle.language.java.plugins.*
import org.gradle.language.java.tasks.*
import org.gradle.language.javascript.*
import org.gradle.language.jvm.*
import org.gradle.language.jvm.plugins.*
import org.gradle.language.jvm.tasks.*
import org.gradle.language.nativeplatform.*
import org.gradle.language.nativeplatform.tasks.*
import org.gradle.language.objectivec.*
import org.gradle.language.objectivec.plugins.*
import org.gradle.language.objectivec.tasks.*
import org.gradle.language.objectivecpp.*
import org.gradle.language.objectivecpp.plugins.*
import org.gradle.language.objectivecpp.tasks.*
import org.gradle.language.plugins.*
import org.gradle.language.rc.*
import org.gradle.language.rc.plugins.*
import org.gradle.language.rc.tasks.*
import org.gradle.language.routes.*
import org.gradle.language.scala.*
import org.gradle.language.scala.plugins.*
import org.gradle.language.scala.tasks.*
import org.gradle.language.scala.toolchain.*
import org.gradle.language.swift.*
import org.gradle.language.swift.plugins.*
import org.gradle.language.swift.tasks.*
import org.gradle.language.twirl.*
import org.gradle.maven.*
import org.gradle.model.*
import org.gradle.nativeplatform.*
import org.gradle.nativeplatform.platform.*
import org.gradle.nativeplatform.plugins.*
import org.gradle.nativeplatform.tasks.*
import org.gradle.nativeplatform.test.*
import org.gradle.nativeplatform.test.cpp.*
import org.gradle.nativeplatform.test.cpp.plugins.*
import org.gradle.nativeplatform.test.cunit.*
import org.gradle.nativeplatform.test.cunit.plugins.*
import org.gradle.nativeplatform.test.cunit.tasks.*
import org.gradle.nativeplatform.test.googletest.*
import org.gradle.nativeplatform.test.googletest.plugins.*
import org.gradle.nativeplatform.test.plugins.*
import org.gradle.nativeplatform.test.tasks.*
import org.gradle.nativeplatform.test.xctest.*
import org.gradle.nativeplatform.test.xctest.plugins.*
import org.gradle.nativeplatform.test.xctest.tasks.*
import org.gradle.nativeplatform.toolchain.*
import org.gradle.nativeplatform.toolchain.plugins.*
import org.gradle.normalization.*
import org.gradle.platform.base.*
import org.gradle.platform.base.binary.*
import org.gradle.platform.base.component.*
import org.gradle.platform.base.plugins.*
import org.gradle.play.*
import org.gradle.play.distribution.*
import org.gradle.play.platform.*
import org.gradle.play.plugins.*
import org.gradle.play.plugins.ide.*
import org.gradle.play.tasks.*
import org.gradle.play.toolchain.*
import org.gradle.plugin.devel.*
import org.gradle.plugin.devel.plugins.*
import org.gradle.plugin.devel.tasks.*
import org.gradle.plugin.management.*
import org.gradle.plugin.use.*
import org.gradle.plugins.ear.*
import org.gradle.plugins.ear.descriptor.*
import org.gradle.plugins.ide.api.*
import org.gradle.plugins.ide.eclipse.*
import org.gradle.plugins.ide.idea.*
import org.gradle.plugins.javascript.base.*
import org.gradle.plugins.javascript.coffeescript.*
import org.gradle.plugins.javascript.envjs.*
import org.gradle.plugins.javascript.envjs.browser.*
import org.gradle.plugins.javascript.envjs.http.*
import org.gradle.plugins.javascript.envjs.http.simple.*
import org.gradle.plugins.javascript.jshint.*
import org.gradle.plugins.javascript.rhino.*
import org.gradle.plugins.signing.*
import org.gradle.plugins.signing.signatory.*
import org.gradle.plugins.signing.signatory.pgp.*
import org.gradle.plugins.signing.type.*
import org.gradle.plugins.signing.type.pgp.*
import org.gradle.process.*
import org.gradle.testing.base.*
import org.gradle.testing.base.plugins.*
import org.gradle.testing.jacoco.plugins.*
import org.gradle.testing.jacoco.tasks.*
import org.gradle.testing.jacoco.tasks.rules.*
import org.gradle.testkit.runner.*
import org.gradle.vcs.*
import org.gradle.vcs.git.*
import org.gradle.workers.*



[5] Any language element except for statement labels.

Authoring Tasks

In the introductory tutorial (Build Script Basics) you learned how to create simple tasks. You also learned how to add additional behavior to these tasks later on, and you learned how to create dependencies between tasks. This was all about simple tasks, but Gradle takes the concept of tasks further. Gradle supports enhanced tasks, which are tasks that have their own properties and methods. This is really different from what you are used to with Ant targets. Such enhanced tasks are either provided by you or built into Gradle.

Task outcomes

When Gradle executes a task, it can label the task with different outcomes in the console UI and via the Tooling API (see Embedding Gradle using the Tooling API). These labels are based on whether a task has actions to execute, whether it should execute those actions, whether it did execute those actions and whether those actions made any changes.

Table 5. Details about task outcomes

Outcome label Description of outcome Situations that have this outcome

(no label) or EXECUTED

Task executed its actions.

  • Used whenever a task has actions and Gradle has determined they should be executed as part of a build.

  • Used whenever a task has no actions and some dependencies, and any of the dependencies are executed. See also the section called “Lifecycle tasks”.

UP-TO-DATE

Task’s outputs did not change.

  • Used when a task has inputs and outputs and they have not changed since the previous execution.

  • Used when a task has no actions and all of its dependencies are up to date, skipped or from cache.

FROM-CACHE

Task’s outputs could be found from a previous execution.

  • Used when a task has outputs restored from the build cache. See Build Cache.

SKIPPED

Task did not execute its actions.

  • Used when a task has been explicitly excluded from the command line.

  • Used when a task has an onlyIf predicate that returns false.

NO-SOURCE

Task did not need to execute its actions.

  • Used when a task has inputs and outputs, but no sources. For example, source files are .java files for JavaCompile.

Defining tasks

We have already seen how to define tasks using a keyword style in Build Script Basics. There are a few variations on this style, which you may need to use in certain situations. For example, the keyword style does not work in expressions.

Example 81. Defining tasks

build.gradle

task(hello) {
    doLast {
        println "hello"
    }
}

task(copy, type: Copy) {
    from(file('srcDir'))
    into(buildDir)
}

You can also use strings for the task names:

Example 82. Defining tasks - using strings for task names

build.gradle

task('hello') {
    doLast {
        println "hello"
    }
}

task('copy', type: Copy) {
    from(file('srcDir'))
    into(buildDir)
}

There is an alternative syntax for defining tasks, which you may prefer to use:

Example 83. Defining tasks with alternative syntax

build.gradle

tasks.create(name: 'hello') {
    doLast {
        println "hello"
    }
}

tasks.create(name: 'copy', type: Copy) {
    from(file('srcDir'))
    into(buildDir)
}

Here we add tasks to the tasks collection. Have a look at TaskContainer for more variations of the create() method.

Locating tasks

You often need to locate the tasks that you have defined in the build file, for example, to configure them or use them for dependencies. There are a number of ways of doing this. Firstly, each task is available as a property of the project, using the task name as the property name:

Example 84. Accessing tasks as properties

build.gradle

task hello

println hello.name
println project.hello.name

Tasks are also available through the tasks collection.

Example 85. Accessing tasks via tasks collection

build.gradle

task hello

println tasks.hello.name
println tasks['hello'].name

You can access tasks from any project using the task’s path using the tasks.getByPath() method. You can call the getByPath() method with a task name, or a relative path, or an absolute path.

Example 86. Accessing tasks by path

build.gradle

project(':projectA') {
    task hello
}

task hello

println tasks.getByPath('hello').path
println tasks.getByPath(':hello').path
println tasks.getByPath('projectA:hello').path
println tasks.getByPath(':projectA:hello').path

Output of gradle -q hello

> gradle -q hello
:hello
:hello
:projectA:hello
:projectA:hello

Have a look at TaskContainer for more options for locating tasks.

Configuring tasks

As an example, let’s look at the Copy task provided by Gradle. To create a Copy task for your build, you can declare one in your build script:

Example 87. Creating a copy task

build.gradle

task myCopy(type: Copy)

This creates a copy task with no default behavior. The task can be configured using its API (see Copy). The following examples show several different ways to achieve the same configuration.

Just to be clear, realize that the name of this task is “myCopy”, but it is of type “Copy”. You can have multiple tasks of the same type, but with different names. You’ll find this gives you a lot of power to implement cross-cutting concerns across all tasks of a particular type.

Example 88. Configuring a task - various ways

build.gradle

Copy myCopy = task(myCopy, type: Copy)
myCopy.from 'resources'
myCopy.into 'target'
myCopy.include('**/*.txt', '**/*.xml', '**/*.properties')

This is similar to the way we would configure objects in Java. You have to repeat the context (myCopy) in every configuration statement, which is redundant and not very nice to read.

There is another way of configuring a task. It also preserves the context and it is arguably the most readable. It is usually our favorite.

Example 89. Configuring a task - with closure

build.gradle

task myCopy(type: Copy)

myCopy {
   from 'resources'
   into 'target'
   include('**/*.txt', '**/*.xml', '**/*.properties')
}

This works for any task. Line 3 of the example is just a shortcut for the tasks.getByName() method. It is important to note that if you pass a closure to the getByName() method, this closure is applied to configure the task, not when the task executes.

You can also use a configuration closure when you define a task.

Example 90. Defining a task with closure

build.gradle

task copy(type: Copy) {
   from 'resources'
   into 'target'
   include('**/*.txt', '**/*.xml', '**/*.properties')
}
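
Because you can have many tasks of the same type, type-based configuration is a natural fit for such cross-cutting concerns. A minimal sketch using tasks.withType(), which configures every task of the given type (including tasks created later):

build.gradle

tasks.withType(Copy) {
    includeEmptyDirs = false
}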

Don’t forget about the build phases

A task has both configuration and actions. When you use doLast, you are simply using a shortcut to define an action. Code defined in the configuration section of your task will get executed during the configuration phase of the build, regardless of which task was targeted. See Build Lifecycle for more details about the build lifecycle.
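
A minimal sketch of the difference:

build.gradle

task phases {
    println 'This runs during the configuration phase, whatever tasks are requested.'
    doLast {
        println 'This runs during the execution phase, only when phases executes.'
    }
}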

Adding dependencies to a task

There are several ways you can define the dependencies of a task. In the section called “Task dependencies” you were introduced to defining dependencies using task names. Task names can refer to tasks in the same project as the task, or to tasks in other projects. To refer to a task in another project, you prefix the name of the task with the path of the project it belongs to. The following is an example which adds a dependency from projectA:taskX to projectB:taskY:

Example 91. Adding dependency on task from another project

build.gradle

project('projectA') {
    task taskX(dependsOn: ':projectB:taskY') {
        doLast {
            println 'taskX'
        }
    }
}

project('projectB') {
    task taskY {
        doLast {
            println 'taskY'
        }
    }
}

Output of gradle -q taskX

> gradle -q taskX
taskY
taskX

Instead of using a task name, you can define a dependency using a Task object, as shown in this example:

Example 92. Adding dependency using task object

build.gradle

task taskX {
    doLast {
        println 'taskX'
    }
}

task taskY {
    doLast {
        println 'taskY'
    }
}

taskX.dependsOn taskY

Output of gradle -q taskX

> gradle -q taskX
taskY
taskX

For more advanced uses, you can define a task dependency using a closure. When evaluated, the closure is passed the task whose dependencies are being calculated. The closure should return a single Task or collection of Task objects, which are then treated as dependencies of the task. The following example adds a dependency from taskX to all the tasks in the project whose name starts with lib:

Example 93. Adding dependency using closure

build.gradle

task taskX {
    doLast {
        println 'taskX'
    }
}

taskX.dependsOn {
    tasks.findAll { task -> task.name.startsWith('lib') }
}

task lib1 {
    doLast {
        println 'lib1'
    }
}

task lib2 {
    doLast {
        println 'lib2'
    }
}

task notALib {
    doLast {
        println 'notALib'
    }
}

Output of gradle -q taskX

> gradle -q taskX
lib1
lib2
taskX

For more information about task dependencies, see the Task API.

Ordering tasks

Task ordering is an incubating feature. Please be aware that this feature may change in later Gradle versions.

In some cases it is useful to control the order in which two tasks will execute, without introducing an explicit dependency between those tasks. The primary difference between a task ordering and a task dependency is that an ordering rule does not influence which tasks will be executed, only the order in which they will be executed.

Task ordering can be useful in a number of scenarios:

  • Enforce sequential ordering of tasks: e.g. 'build' never runs before 'clean'.

  • Run build validations early in the build: e.g. validate I have the correct credentials before starting the work for a release build.

  • Get feedback faster by running quick verification tasks before long verification tasks: e.g. unit tests should run before integration tests.

  • A task that aggregates the results of all tasks of a particular type: e.g. test report task combines the outputs of all executed test tasks.

There are two ordering rules available: “must run after” and “should run after”.

When you use the “must run after” ordering rule, you specify that taskB must always run after taskA whenever both taskA and taskB will be run. This is expressed as taskB.mustRunAfter(taskA). The “should run after” ordering rule is similar but less strict, as it will be ignored in two situations: first, if using that rule introduces an ordering cycle; and second, when using parallel execution, if all dependencies of a task have been satisfied apart from the “should run after” task, then that task will be run regardless of whether its “should run after” dependencies have run or not. You should use “should run after” where the ordering is helpful but not strictly required.

With these rules present it is still possible to execute taskA without taskB and vice-versa.

Example 94. Adding a 'must run after' task ordering

build.gradle

task taskX {
    doLast {
        println 'taskX'
    }
}
task taskY {
    doLast {
        println 'taskY'
    }
}
taskY.mustRunAfter taskX

Output of gradle -q taskY taskX

> gradle -q taskY taskX
taskX
taskY

Example 95. Adding a 'should run after' task ordering

build.gradle

task taskX {
    doLast {
        println 'taskX'
    }
}
task taskY {
    doLast {
        println 'taskY'
    }
}
taskY.shouldRunAfter taskX

Output of gradle -q taskY taskX

> gradle -q taskY taskX
taskX
taskY

In the examples above, it is still possible to execute taskY without causing taskX to run:

Example 96. Task ordering does not imply task execution

Output of gradle -q taskY

> gradle -q taskY
taskY

To specify a “must run after” or “should run after” ordering between two tasks, you use the Task.mustRunAfter(java.lang.Object[]) and Task.shouldRunAfter(java.lang.Object[]) methods. These methods accept a task instance, a task name or any other input accepted by Task.dependsOn(java.lang.Object[]).
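
For instance, the ordering from Example 94 could equally have been declared with a task name instead of a task instance:

build.gradle

taskY.mustRunAfter 'taskX'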

Note that “B.mustRunAfter(A)” or “B.shouldRunAfter(A)” does not imply any execution dependency between the tasks:

  • It is possible to execute tasks A and B independently. The ordering rule only has an effect when both tasks are scheduled for execution.

  • When run with --continue, it is possible for B to execute in the event that A fails.

As mentioned before, the “should run after” ordering rule will be ignored if it introduces an ordering cycle:

Example 97. A 'should run after' task ordering is ignored if it introduces an ordering cycle

build.gradle

task taskX {
    doLast {
        println 'taskX'
    }
}
task taskY {
    doLast {
        println 'taskY'
    }
}
task taskZ {
    doLast {
        println 'taskZ'
    }
}
taskX.dependsOn taskY
taskY.dependsOn taskZ
taskZ.shouldRunAfter taskX

Output of gradle -q taskX

> gradle -q taskX
taskZ
taskY
taskX

Adding a description to a task

You can add a description to your task. This description is displayed when executing gradle tasks.

Example 98. Adding a description to a task

build.gradle

task copy(type: Copy) {
   description 'Copies the resource directory to the target directory.'
   from 'resources'
   into 'target'
   include('**/*.txt', '**/*.xml', '**/*.properties')
}

Replacing tasks

Sometimes you want to replace a task. For example, if you want to exchange a task added by the Java plugin with a custom task of a different type. You can achieve this with:

Example 99. Overwriting a task

build.gradle

task copy(type: Copy)

task copy(overwrite: true) {
    doLast {
        println('I am the new one.')
    }
}

Output of gradle -q copy

> gradle -q copy
I am the new one.

This will replace a task of type Copy with the task you’ve defined, because it uses the same name. When you define the new task, you have to set the overwrite property to true. Otherwise Gradle throws an exception, saying that a task with that name already exists.

Skipping tasks

Gradle offers multiple ways to skip the execution of a task.

Using a predicate

You can use the onlyIf() method to attach a predicate to a task. The task’s actions are only executed if the predicate evaluates to true. You implement the predicate as a closure. The closure is passed the task as a parameter, and should return true if the task should execute and false if the task should be skipped. The predicate is evaluated just before the task is due to be executed.

Example 100. Skipping a task using a predicate

build.gradle

task hello {
    doLast {
        println 'hello world'
    }
}

hello.onlyIf { !project.hasProperty('skipHello') }

Output of gradle hello -PskipHello

> gradle hello -PskipHello
:hello SKIPPED

BUILD SUCCESSFUL in 0s

Using StopExecutionException

If the logic for skipping a task can’t be expressed with a predicate, you can use the StopExecutionException. If this exception is thrown by an action, the further execution of this action as well as the execution of any following action of this task is skipped. The build continues with executing the next task.

Example 101. Skipping tasks with StopExecutionException

build.gradle

task compile {
    doLast {
        println 'We are doing the compile.'
    }
}

compile.doFirst {
    // Here you would put arbitrary conditions in real life.
    // But this is used in an integration test so we want defined behavior.
    if (true) { throw new StopExecutionException() }
}
task myTask(dependsOn: 'compile') {
    doLast {
        println 'I am not affected'
    }
}

Output of gradle -q myTask

> gradle -q myTask
I am not affected

This feature is helpful if you work with tasks provided by Gradle. It allows you to add conditional execution of the built-in actions of such a task.[6]

Enabling and disabling tasks

Every task has an enabled flag which defaults to true. Setting it to false prevents the execution of any of the task’s actions. A disabled task will be labelled SKIPPED.

Example 102. Enabling and disabling tasks

build.gradle

task disableMe {
    doLast {
        println 'This should not be printed if the task is disabled.'
    }
}
disableMe.enabled = false

Output of gradle disableMe

> gradle disableMe
:disableMe SKIPPED

BUILD SUCCESSFUL in 0s

Up-to-date checks (AKA Incremental Build)

An important part of any build tool is the ability to avoid doing work that has already been done. Consider the process of compilation. Once your source files have been compiled, there should be no need to recompile them unless something has changed that affects the output, such as the modification of a source file or the removal of an output file. And compilation can take a significant amount of time, so skipping the step when it’s not needed saves a lot of time.

Gradle supports this behavior out of the box through a feature it calls incremental build. You have almost certainly already seen it in action: it’s active nearly every time the UP-TO-DATE text appears next to the name of a task when you run a build. Task outcomes are described in the section called “Task outcomes”.

How does incremental build work? And what does it take to make use of it in your own tasks? Let’s take a look.

Task inputs and outputs

In the most common case, a task takes some inputs and generates some outputs. If we use the compilation example from earlier, we can see that the source files are the inputs and, in the case of Java, the generated class files are the outputs. Other inputs might include things like whether debug information should be included.

Figure 5. Example task inputs and outputs


An important characteristic of an input is that it affects one or more outputs, as you can see from the previous figure. Different bytecode is generated depending on the content of the source files and the minimum version of the Java runtime you want to run the code on. That makes them task inputs. But whether compilation has 500MB or 600MB of maximum memory available, determined by the memoryMaximumSize property, has no impact on what bytecode gets generated. In Gradle terminology, memoryMaximumSize is just an internal task property.

As part of incremental build, Gradle tests whether any of the task inputs or outputs have changed since the last build. If they haven’t, Gradle can consider the task up to date and therefore skip executing its actions. Also note that incremental build won’t work unless a task has at least one task output, although tasks usually have at least one input as well.

What this means for build authors is simple: you need to tell Gradle which task properties are inputs and which are outputs. If a task property affects the output, be sure to register it as an input, otherwise the task will be considered up to date when it’s not. Conversely, don’t register properties as inputs if they don’t affect the output, otherwise the task will potentially execute when it doesn’t need to. Also be careful of non-deterministic tasks that may generate different output for exactly the same inputs: these should not be configured for incremental build as the up-to-date checks won’t work.
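
As a first taste, an ad-hoc task declared in a build script can register inputs and outputs through its inputs and outputs objects; a minimal sketch (the version property and the file name are illustrative):

build.gradle

task generateVersionFile {
    def output = file("$buildDir/version.txt")
    inputs.property('version', version)
    outputs.file(output)
    doLast {
        output.parentFile.mkdirs()
        output.text = "version=$version"
    }
}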

Let’s now look at how you can register task properties as inputs and outputs.

Custom task types

If you’re implementing a custom task as a class, then it takes just two steps to make it work with incremental build:

  1. Create typed properties (via getter methods) for each of your task inputs and outputs

  2. Add the appropriate annotation to each of those properties

Annotations must be placed on getters or on Groovy properties. Annotations placed on setters, or on a Java field without a corresponding annotated getter, are ignored.

Gradle supports three main categories of inputs and outputs:

  • Simple values

    Things like strings and numbers. More generally, a simple value can have any type that implements Serializable.

  • Filesystem types

    These consist of the standard File class but also derivatives of Gradle’s FileCollection type and anything else that can be passed to either the Project.file(java.lang.Object) method - for single file/directory properties - or the Project.files(java.lang.Object[]) method.

  • Nested values

    Custom types that don’t conform to the other two categories but have their own properties that are inputs or outputs. In effect, the task inputs or outputs are nested inside these custom types.

As an example, imagine you have a task that processes templates of varying types, such as FreeMarker, Velocity, Moustache, etc. It takes template source files and combines them with some model data to generate populated versions of the template files.

This task will have three inputs and one output:

  • Template source files

  • Model data

  • Template engine

  • Where the output files are written

When you’re writing a custom task class, it’s easy to register properties as inputs or outputs via annotations. To demonstrate, here is a skeleton task implementation with some suitable inputs and outputs, along with their annotations:

Example 103. Custom task class

buildSrc/src/main/java/org/example/ProcessTemplates.java

package org.example;

import java.io.File;
import java.util.HashMap;
import org.gradle.api.*;
import org.gradle.api.file.*;
import org.gradle.api.tasks.*;

public class ProcessTemplates extends DefaultTask {
    private TemplateEngineType templateEngine;
    private FileCollection sourceFiles;
    private TemplateData templateData;
    private File outputDir;

    @Input
    public TemplateEngineType getTemplateEngine() {
        return this.templateEngine;
    }

    @InputFiles
    public FileCollection getSourceFiles() {
        return this.sourceFiles;
    }

    @Nested
    public TemplateData getTemplateData() {
        return this.templateData;
    }

    @OutputDirectory
    public File getOutputDir() { return this.outputDir; }

    // + setter methods for the above - assume we’ve defined them

    @TaskAction
    public void processTemplates() {
        // ...
    }
}

buildSrc/src/main/java/org/example/TemplateData.java

package org.example;

import java.util.HashMap;
import java.util.Map;
import org.gradle.api.tasks.Input;

public class TemplateData {
    private String name;
    private Map<String, String> variables;

    public TemplateData(String name, Map<String, String> variables) {
        this.name = name;
        this.variables = new HashMap<>(variables);
    }

    @Input
    public String getName() { return this.name; }

    @Input
    public Map<String, String> getVariables() {
        return this.variables;
    }
}

Output of gradle processTemplates

> gradle processTemplates
:processTemplates

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

Output of gradle processTemplates

> gradle processTemplates
:processTemplates UP-TO-DATE

BUILD SUCCESSFUL in 0s
1 actionable task: 1 up-to-date

There’s plenty to talk about in this example, so let’s work through each of the input and output properties in turn:

  • templateEngine

    Represents which engine to use when processing the source templates, e.g. FreeMarker, Velocity, etc. You could implement this as a string, but in this case we have gone for a custom enum as it provides greater type information and safety. Since enums implement Serializable automatically, we can treat this as a simple value and use the @Input annotation, just as we would with a String property.

  • sourceFiles

    The source templates that the task will be processing. Single files and collections of files need their own special annotations. In this case, we’re dealing with a collection of input files and so we use the @InputFiles annotation. You’ll see more file-oriented annotations in a table later.

  • templateData

    For this example, we’re using a custom class to represent the model data. However, it does not implement Serializable, so we can’t use the @Input annotation. That’s not a problem as the properties within TemplateData - a string and a hash map with serializable type parameters - are serializable and can be annotated with @Input. We use @Nested on templateData to let Gradle know that this is a value with nested input properties.

  • outputDir

    The directory where the generated files go. As with input files, there are several annotations for output files and directories. A property representing a single directory requires @OutputDirectory. You’ll learn about the others soon.

These annotated properties mean that Gradle will skip the task if none of the source files, template engine, model data or generated files have changed since the previous time Gradle executed the task. This will often save a significant amount of time. You can learn how Gradle detects changes later.

This example is particularly interesting because it works with collections of source files. What happens if only one source file changes? Does the task process all the source files again or just the modified one? That depends on the task implementation. If the latter, then the task itself is incremental, but that’s a different feature to the one we’re discussing here. Gradle does help task implementers with this via its incremental task inputs feature.

Now that you have seen some of the input and output annotations in practice, let’s take a look at all the annotations available to you and when you should use them. The table below lists the available annotations and the corresponding property type you can use with each one.

Table 6. Incremental build property type annotations

Annotation Expected property type Description

@Input

Any serializable type

A simple input value

@InputFile

File*

A single input file (not directory)

@InputDirectory

File*

A single input directory (not file)

@InputFiles

Iterable<File>*

An iterable of input files and directories

@Classpath

Iterable<File>*

An iterable of input files and directories that represent a Java classpath. This allows the task to ignore irrelevant changes to the property, such as different names for the same files. It is similar to annotating the property @PathSensitive(RELATIVE) but it will ignore the names of JAR files directly added to the classpath, and it will consider changes in the order of the files as a change in the classpath. Gradle will inspect the contents of jar files on the classpath and ignore changes that do not affect the semantics of the classpath (such as file dates and entry order). See also the section called “Using the classpath annotations”.

The @Classpath annotation was introduced in Gradle 3.2. To stay compatible with earlier Gradle versions, classpath properties should also be annotated with @InputFiles.

@CompileClasspath

Iterable<File>*

An iterable of input files and directories that represent a Java compile classpath. This allows the task to ignore irrelevant changes that do not affect the API of the classes in classpath. See also the section called “Using the classpath annotations”.

The following kinds of changes to the classpath will be ignored:

  • Changes to the path of jar or top level directories.

  • Changes to timestamps and the order of entries in Jars.

  • Changes to resources and Jar manifests, including adding or removing resources.

  • Changes to private class elements, such as private fields, methods and inner classes.

  • Changes to code, such as method bodies, static initializers and field initializers (except for constants).

  • Changes to debug information, for example when a change to a comment affects the line numbers in class debug information.

  • Changes to directories, including directory entries in Jars.

The @CompileClasspath annotation was introduced in Gradle 3.4. To stay compatible with Gradle 3.3 and 3.2, compile classpath properties should also be annotated with @Classpath. For compatibility with Gradle versions before 3.2 the property should also be annotated with @InputFiles.

@OutputFile

File*

A single output file (not directory)

@OutputDirectory

File*

A single output directory (not file)

@OutputFiles

Map<String, File>** or Iterable<File>*

An iterable of output files (no directories). The task outputs can only be cached if a Map is provided.

@OutputDirectories

Map<String, File>** or Iterable<File>*

An iterable of output directories (no files). The task outputs can only be cached if a Map is provided.

@Destroys

File or Iterable<File>*

Specifies one or more files that are removed by this task. Note that a task can define either inputs/outputs or destroyables, but not both.

@LocalState

File or Iterable<File>*

Specifies one or more files that represent the local state of the task. These files are removed when the task is loaded from cache.

@Nested

Any custom type

A custom type that may not implement Serializable but does have at least one field or property marked with one of the annotations in this table. It could even be another @Nested.

@Console

Any type

Indicates that the property is neither an input nor an output. It simply affects the console output of the task in some way, such as increasing or decreasing the verbosity of the task.

@Internal

Any type

Indicates that the property is used internally but is neither an input nor an output.

*

In fact, File can be any type accepted by Project.file(java.lang.Object) and Iterable<File> can be any type accepted by Project.files(java.lang.Object[]). This includes instances of Callable, such as closures, allowing for lazy evaluation of the property values. Be aware that the types FileCollection and FileTree are Iterable<File>s.

**

Similar to the above, File can be any type accepted by Project.file(java.lang.Object). The Map itself can be wrapped in Callables, such as closures.
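
To illustrate the flexibility described in these footnotes, here is a minimal sketch of a custom task type that declares its output property as an Object, so that any type accepted by Project.file(java.lang.Object) can be assigned to it, including a closure for lazy evaluation. The GenerateFile class and generateVersionFile task are hypothetical names, not part of the Gradle API:

class GenerateFile extends DefaultTask {
    @Input
    String content

    // Declared as Object so any type accepted by Project.file()
    // can be assigned, including a closure for lazy evaluation
    @OutputFile
    Object destination

    @TaskAction
    void generate() {
        project.file(destination).text = content
    }
}

task generateVersionFile(type: GenerateFile) {
    content = project.version.toString()
    destination = { "$buildDir/version.txt" }  // resolved when needed
}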

Table 7. Additional annotations used to further qualify property type annotations

Annotation Description

@SkipWhenEmpty

Used with @InputFiles or @InputDirectory to tell Gradle to skip the task if the corresponding files or directory are empty. A task that is skipped because all of the input files declared with this annotation are empty results in a distinct “no source” outcome; for example, NO-SOURCE will be emitted in the console output.

@Optional

Used with any of the property type annotations listed in the Optional API documentation. This annotation disables validation checks on the corresponding property. See the section on validation for more details.

@PathSensitive

Used with any input file property to tell Gradle to only consider the given part of the file paths as important. For example, if a property is annotated with @PathSensitive(PathSensitivity.NAME_ONLY), then moving the files around without changing their contents will not make the task out-of-date.
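
As a sketch of @PathSensitive in use, the following hypothetical task type only becomes out of date when the names or contents of its input files change, not when the files move to a different directory:

class ChecksumTask extends DefaultTask {
    // Only file names and contents matter; moving the files between
    // directories without renaming them keeps the task up to date
    @InputFiles
    @PathSensitive(PathSensitivity.NAME_ONLY)
    FileCollection sourceFiles

    @OutputFile
    File checksumFile

    @TaskAction
    void generate() {
        checksumFile.text = sourceFiles.collect {
            "${it.name}:${it.length()}"
        }.sort().join('\n')
    }
}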

Annotations are inherited from all parent types including implemented interfaces. Property type annotations override any other property type annotation declared in a parent type. This way an @InputFile property can be turned into an @InputDirectory property in a child task type.

Annotations on a property declared in a type override similar annotations declared by the superclass and in any implemented interfaces. Superclass annotations take precedence over annotations declared in implemented interfaces.
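
For example, a child task type can re-declare an inherited property with a different annotation. The task types below are a hypothetical sketch of this mechanism:

class FileProcessor extends DefaultTask {
    private File source

    @InputFile
    File getSource() { source }

    void setSource(File source) { this.source = source }

    @TaskAction
    void process() {
        println "Processing ${getSource().name}"
    }
}

class DirectoryProcessor extends FileProcessor {
    // The annotation on the overriding getter takes precedence,
    // turning the inherited @InputFile property into @InputDirectory
    @Override
    @InputDirectory
    File getSource() { super.getSource() }
}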

The Console and Internal annotations in the table are special cases as they don’t declare either task inputs or task outputs. So why use them? It’s so that you can take advantage of the Java Gradle Plugin Development plugin to help you develop and publish your own plugins. This plugin checks whether any properties of your custom task classes lack an incremental build annotation. This protects you from forgetting to add an appropriate annotation during development.

Using the classpath annotations

Besides @InputFiles, Gradle understands the concept of classpath inputs for JVM-related tasks. Runtime and compile classpaths are each treated differently when Gradle is looking for changes.

As opposed to input properties annotated with @InputFiles, for classpath properties the order of the entries in the file collection matters. On the other hand, the names and paths of the directories and jar files on the classpath itself are ignored. Timestamps and the order of class files and resources inside jar files on a classpath are ignored, too, so recreating a jar file with different file dates will not make the task out of date.

Runtime classpaths are marked with @Classpath, and they offer further customization via classpath normalization.

Input properties annotated with @CompileClasspath are considered Java compile classpaths. In addition to the general classpath rules described above, compile classpaths ignore changes to everything but class files. Gradle uses the same class analysis described in the section called “Compile avoidance” to further filter out changes that don’t affect the classes’ ABIs. This means that changes which only touch the implementation of classes do not make the task out of date.

Nested inputs

When analyzing @Nested task properties for declared input and output sub-properties, Gradle uses the type of the actual value. Hence it can discover all sub-properties declared by a runtime sub-type.

When adding @Nested to an iterable, each element is treated as a separate nested input.

This allows richer modeling and extensibility for tasks, as shown, for example, by CompileOptions.getCompilerArgumentProviders().

For example, to declare annotation processor arguments, it is possible to do the following:

class MyAnnotationProcessor implements CompilerArgumentProvider {
    @InputFile
    @PathSensitive(PathSensitivity.NONE)
    File inputFile

    @OutputFile
    File outputFile

    MyAnnotationProcessor(File inputFile, File outputFile) {
        this.inputFile = inputFile
        this.outputFile = outputFile
    }

    @Override
    List<String> asArguments() {
        [
            "-AinputFile=${inputFile.absolutePath}",
            "-AoutputFile=${outputFile.absolutePath}"
        ]
    }
}

compileJava.options.compilerArgumentProviders << new MyAnnotationProcessor(inputFile, outputFile)

This models an annotation processor which requires an input file and generates an output file.

The approach works for Java compiler arguments, since CompileOptions.getCompilerArgumentProviders() is an Iterable annotated with @Nested. In the same way, this kind of modelling is available to custom tasks.

Runtime API

Custom task classes are an easy way to bring your own build logic into the arena of incremental build, but you don’t always have that option. That’s why Gradle also provides an alternative API that can be used with any tasks, which we look at next.

When you don’t have access to the source for a custom task class, there is no way to add any of the annotations we covered in the previous section. Fortunately, Gradle provides a runtime API for scenarios just like that. It can also be used for ad-hoc tasks, as you’ll see next.

Using it for ad-hoc tasks

This runtime API is provided through a few aptly named properties that are available on every Gradle task:

  • Task.getInputs() of type TaskInputs

  • Task.getOutputs() of type TaskOutputs

  • Task.getDestroyables() of type TaskDestroyables

These objects have methods that allow you to specify files, directories and values which constitute the task’s inputs and outputs. In fact, the runtime API has almost feature parity with the annotations: all it lacks is validation that declared files are actually files and that declared directories are directories, and it won’t create output directories that don’t exist. But that’s it.

Let’s take the template processing example from before and see how it would look as an ad-hoc task that uses the runtime API:

Example 104. Ad-hoc task

build.gradle

task processTemplatesAdHoc {
    inputs.property("engine", TemplateEngineType.FREEMARKER)
    inputs.files(fileTree("src/templates"))
    inputs.property("templateData.name", "docs")
    inputs.property("templateData.variables", [year: 2013])
    outputs.dir("$buildDir/genOutput2")

    doLast {
        // Process the templates here
    }
}

Output of gradle processTemplatesAdHoc

> gradle processTemplatesAdHoc
:processTemplatesAdHoc

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

As before, there’s much to talk about. To begin with, you should really write a custom task class for this as it’s a non-trivial implementation that has several configuration options. In this case, there are no task properties to store the root source folder, the location of the output directory or any of the other settings. That’s deliberate to highlight the fact that the runtime API doesn’t require the task to have any state. In terms of incremental build, the above ad-hoc task will behave the same as the custom task class.

All the input and output definitions are done through the methods on inputs and outputs, such as property(), files(), and dir(). Gradle performs up-to-date checks on the argument values to determine whether the task needs to run again or not. Each method corresponds to one of the incremental build annotations, for example inputs.property() maps to @Input and outputs.dir() maps to @OutputDirectory. The only difference is that the file(), files(), dir() and dirs() methods don’t validate the type of file object at the given path (file or directory), unlike the annotations.

The files that a task removes can be specified through destroyables.register().

Example 105. Ad-hoc task declaring a destroyable

build.gradle

task removeTempDir {
    destroyables.register("$projectDir/tmpDir")
    doLast {
        delete("$projectDir/tmpDir")
    }
}

One notable difference between the runtime API and the annotations is the lack of a method that corresponds directly to @Nested. That’s why the example uses two property() declarations for the template data, one for each TemplateData property. You should use the same technique when working with nested values through the runtime API. Any given task can either declare destroyables or inputs/outputs, but cannot declare both.

Using it for custom task types

Another type of example involves adding input and output definitions to instances of a custom task class that lacks the requisite annotations. For example, imagine that the ProcessTemplates task is provided by a plugin and that it’s missing the incremental build annotations. In order to make up for that deficiency, you can use the runtime API:

Example 106. Using runtime API with custom task type

build.gradle

task processTemplatesRuntime(type: ProcessTemplatesNoAnnotations) {
    templateEngine = TemplateEngineType.FREEMARKER
    sourceFiles = fileTree("src/templates")
    templateData = new TemplateData("test", [year: 2014])
    outputDir = file("$buildDir/genOutput3")

    inputs.property("engine",templateEngine)
    inputs.files(sourceFiles)
    inputs.property("templateData.name", templateData.name)
    inputs.property("templateData.variables", templateData.variables)
    outputs.dir(outputDir)
}

Output of gradle processTemplatesRuntime

> gradle processTemplatesRuntime
:processTemplatesRuntime

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

Output of gradle processTemplatesRuntime

> gradle processTemplatesRuntime
:processTemplatesRuntime UP-TO-DATE

BUILD SUCCESSFUL in 0s
1 actionable task: 1 up-to-date

As you can see, we can both configure the task’s properties and use those properties as arguments to the incremental build runtime API. Using the runtime API like this is a little like using doLast() and doFirst() to attach extra actions to a task, except in this case we’re attaching information about inputs and outputs. Note that if the task type is already using the incremental build annotations, the runtime API will add inputs and outputs rather than replace them.

Fine-grained configuration

On their own, the runtime API methods only allow you to declare inputs and outputs. However, the file-oriented ones return a builder - of type TaskInputFilePropertyBuilder - that lets you provide additional information about those inputs and outputs.

You can learn about all the options provided by the builder in its API documentation, but we’ll show you a simple example here to give you an idea of what you can do.

Let’s say we don’t want to run the processTemplates task if there are no source files, regardless of whether it’s a clean build or not. After all, if there are no source files, there’s nothing for the task to do. The builder allows us to configure this like so:

Example 107. Using skipWhenEmpty() via the runtime API

build.gradle

task processTemplatesRuntimeConf(type: ProcessTemplatesNoAnnotations) {
    // ...
    sourceFiles = fileTree("src/templates") {
        include "**/*.fm"
    }

    inputs.files(sourceFiles).skipWhenEmpty()
    // ...
}

Output of gradle clean processTemplatesRuntimeConf

> gradle clean processTemplatesRuntimeConf
:processTemplatesRuntimeConf NO-SOURCE

BUILD SUCCESSFUL in 0s
1 actionable task: 1 up-to-date

The TaskInputs.files() method returns a builder that has a skipWhenEmpty() method. Invoking this method is equivalent to annotating the property with @SkipWhenEmpty.

Prior to Gradle 3.0, you had to use the TaskInputs.source() and TaskInputs.sourceDir() methods to get the same behavior as with skipWhenEmpty(). These methods are now deprecated and should not be used with Gradle 3.0 and above.

Now that you have seen both the annotations and the runtime API, you may be wondering which API you should be using. Our recommendation is to use the annotations wherever possible, and it’s sometimes worth creating a custom task class just so that you can make use of them. The runtime API is more for situations in which you can’t use the annotations.

Important beneficial side effects

Once you declare a task’s formal inputs and outputs, Gradle can then infer things about those properties. For example, if an input of one task is set to the output of another, that means the first task depends on the second, right? Gradle knows this and can act upon it.

We’ll look at this feature next and also some other features that come from Gradle knowing things about inputs and outputs.

Inferred task dependencies

Consider an archive task that packages the output of the processTemplates task. A build author will see that the archive task obviously requires processTemplates to run first and so may add an explicit dependsOn. However, if you define the archive task like so:

Example 108. Inferred task dependency via task outputs

build.gradle

task packageFiles(type: Zip) {
    from processTemplates.outputs
}

Output of gradle clean packageFiles

> gradle clean packageFiles
:processTemplates
:packageFiles

BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date

Gradle will automatically make packageFiles depend on processTemplates. It can do this because it’s aware that one of the inputs of packageFiles requires the output of the processTemplates task. We call this an inferred task dependency.

The above example can also be written as:

Example 109. Inferred task dependency via a task argument

build.gradle

task packageFiles2(type: Zip) {
    from processTemplates
}

Output of gradle clean packageFiles2

> gradle clean packageFiles2
:processTemplates
:packageFiles2

BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date

This is because the from() method can accept a task object as an argument. Behind the scenes, from() uses the project.files() method to wrap the argument, which in turn exposes the task’s formal outputs as a file collection. In other words, it’s a special case!

Input and output validation

The incremental build annotations provide enough information for Gradle to perform some basic validation on the annotated properties. In particular, it does the following for each property before the task executes:

  • @InputFile - verifies that the property has a value and that the path corresponds to a file (not a directory) that exists.

  • @InputDirectory - same as for @InputFile, except the path must correspond to a directory.

  • @OutputDirectory - verifies that the path doesn’t match a file and also creates the directory if it doesn’t already exist.

Such validation improves the robustness of the build, allowing you to identify issues related to inputs and outputs quickly.

You will occasionally want to disable some of this validation, specifically when an input file may validly not exist. That’s why Gradle provides the @Optional annotation: you use it to tell Gradle that a particular input is optional and therefore the build should not fail if the corresponding file or directory doesn’t exist.
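
Here is a minimal sketch of a hypothetical task type with an optional input file (ProcessConfig is an illustrative name, not a Gradle type):

class ProcessConfig extends DefaultTask {
    // May legitimately be left unset; validation is skipped
    // when the value is null
    @Optional
    @InputFile
    File overridesFile

    @OutputFile
    File outputFile

    @TaskAction
    void process() {
        outputFile.text = overridesFile ? overridesFile.text : 'defaults'
    }
}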

Continuous build

Another benefit of defining task inputs and outputs is continuous build. Since Gradle knows what files a task depends on, it can automatically run a task again if any of its inputs change. By activating continuous build when you run Gradle - through the --continuous or -t options - you will put Gradle into a state in which it continually checks for changes and executes the requested tasks when it encounters such changes.

You can find out more about this feature in Continuous build.

Task parallelism

One last benefit of defining task inputs and outputs is that Gradle can use this information to make decisions about how to run tasks when the --parallel option is used. For instance, Gradle will inspect the outputs of tasks when selecting the next task to run, and will avoid concurrent execution of tasks that write to the same output directory. Similarly, Gradle will use the information about what files a task destroys (e.g. specified by the Destroys annotation) to avoid running a task that removes a set of files while another task that consumes or creates those same files is running (and vice versa). It can also determine that a task that creates a set of files has already run and that a task that consumes those files has yet to run, and will avoid running a task that removes those files in between. By providing task input and output information in this way, Gradle can infer creation/consumption/destruction relationships between tasks and can ensure that task execution does not violate those relationships.

How does it work?

Before a task is executed for the first time, Gradle takes a snapshot of the inputs. This snapshot contains the paths of input files and a hash of the contents of each file. Gradle then executes the task. If the task completes successfully, Gradle takes a snapshot of the outputs. This snapshot contains the set of output files and a hash of the contents of each file. Gradle persists both snapshots for the next time the task is executed.

Each time after that, before the task is executed, Gradle takes a new snapshot of the inputs and outputs. If the new snapshots are the same as the previous snapshots, Gradle assumes that the outputs are up to date and skips the task. If they are not the same, Gradle executes the task. Gradle persists both snapshots for the next time the task is executed.

Gradle also considers the code of the task as part of the inputs to the task. When a task, its actions, or its dependencies change between executions, Gradle considers the task as out-of-date.

Gradle understands if a file property (e.g. one holding a Java classpath) is order-sensitive. When comparing the snapshot of such a property, even a change in the order of the files will result in the task becoming out-of-date.

Note that if a task has an output directory specified, any files added to that directory since the last time it was executed are ignored and will NOT cause the task to be out of date. This is so unrelated tasks may share an output directory without interfering with each other. If this is not the behaviour you want for some reason, consider using TaskOutputs.upToDateWhen(groovy.lang.Closure).

The inputs for the task are also used to calculate the build cache key used to load task outputs when enabled. For more details see the section called “Task Output Caching”.

Advanced techniques

Everything you’ve seen so far in this section will cover most of the use cases you’ll encounter, but there are some scenarios that need special treatment. We’ll present a few of those next with the appropriate solutions.

Adding your own cached input/output methods

Have you ever wondered how the from() method of the Copy task works? It’s not annotated with @InputFiles and yet any files passed to it are treated as formal inputs of the task. What’s happening?

The implementation is quite simple and you can use the same technique for your own tasks to improve their APIs. Write your methods so that they add files directly to the appropriate annotated property. As an example, here’s how to add a sources() method to the custom ProcessTemplates class we introduced earlier:

Example 110. Declaring a method to add task inputs

buildSrc/src/main/java/org/example/ProcessTemplates.java

public class ProcessTemplates extends DefaultTask {
    // ...
    private FileCollection sourceFiles = getProject().files();

    @SkipWhenEmpty
    @InputFiles
    @PathSensitive(PathSensitivity.NONE)
    public FileCollection getSourceFiles() {
        return this.sourceFiles;
    }

    public void sources(FileCollection sourceFiles) {
        this.sourceFiles = this.sourceFiles.plus(sourceFiles);
    }

    // ...
}

build.gradle

task processTemplates(type: ProcessTemplates) {
    templateEngine = TemplateEngineType.FREEMARKER
    templateData = new TemplateData("test", [year: 2012])
    outputDir = file("$buildDir/genOutput")

    sources fileTree("src/templates")
}

Output of gradle processTemplates

> gradle processTemplates
:processTemplates

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

In other words, as long as you add values and files to formal task inputs and outputs during the configuration phase, they will be treated as such regardless of where in the build you add them.

If we want to support tasks as arguments as well and treat their outputs as the inputs, we can use the project.files() method like so:

Example 111. Declaring a method to add a task as an input

buildSrc/src/main/java/org/example/ProcessTemplates.java

// ...
public void sources(Task inputTask) {
    this.sourceFiles = this.sourceFiles.plus(getProject().files(inputTask));
}
// ...

build.gradle

task copyTemplates(type: Copy) {
    into "$buildDir/tmp"
    from "src/templates"
}

task processTemplates2(type: ProcessTemplates) {
    // ...
    sources copyTemplates
}

Output of gradle processTemplates2

> gradle processTemplates2
:copyTemplates
:processTemplates2

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

This technique can make your custom task easier to use and result in cleaner build files. As an added benefit, our use of getProject().files() means that our custom method can set up an inferred task dependency.

One last thing to note: if you are developing a task that takes collections of source files as inputs, like this example, consider using the built-in SourceTask. It will save you having to implement some of the plumbing that we put into ProcessTemplates.
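
For instance, a variant of the template processing task could extend SourceTask and inherit a ready-made source property, complete with the @InputFiles and @SkipWhenEmpty handling. This is a sketch with hypothetical names, not the implementation used by this chapter’s samples:

class ProcessTemplatesFromSources extends SourceTask {
    @OutputDirectory
    File outputDir

    @TaskAction
    void process() {
        // getSource() is inherited from SourceTask and already
        // carries the input annotations
        source.each { File sourceFile ->
            new File(outputDir, sourceFile.name).text = sourceFile.text
        }
    }
}

task processTemplateSources(type: ProcessTemplatesFromSources) {
    source fileTree('src/templates')
    outputDir = file("$buildDir/processedTemplates")
}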

Linking an @OutputDirectory to an @InputFiles

When you want to link the output of one task to the input of another, the types often match and a simple property assignment will provide that link. For example, a File output property can be assigned to a File input.

Unfortunately, this approach breaks down when you want the files in a task’s @OutputDirectory (of type File) to become the source for another task’s @InputFiles property (of type FileCollection). Since the two have different types, property assignment won’t work.

As an example, imagine you want to use the output of a Java compilation task - via the destinationDir property - as the input of a custom task that instruments a set of files containing Java bytecode. This custom task, which we’ll call Instrument, has a classFiles property annotated with @InputFiles. You might initially try to configure the task like so:

Example 112. Failed attempt at setting up an inferred task dependency

build.gradle

apply plugin: "java"

task badInstrumentClasses(type: Instrument) {
    classFiles = fileTree(compileJava.destinationDir)
    destinationDir = file("$buildDir/instrumented")
}

Output of gradle clean badInstrumentClasses

> gradle clean badInstrumentClasses
:clean UP-TO-DATE
:badInstrumentClasses NO-SOURCE

BUILD SUCCESSFUL in 0s
1 actionable task: 1 up-to-date

There’s nothing obviously wrong with this code, but you can see from the console output that the compilation task is missing. In this case you would need to add an explicit task dependency between badInstrumentClasses and compileJava via dependsOn. The use of fileTree() means that Gradle can’t infer the task dependency itself.

One solution is to use the TaskOutputs.files property, as demonstrated by the following example:

Example 113. Setting up an inferred task dependency between output dir and input files

build.gradle

task instrumentClasses(type: Instrument) {
    classFiles = compileJava.outputs.files
    destinationDir = file("$buildDir/instrumented")
}

Output of gradle clean instrumentClasses

> gradle clean instrumentClasses
:clean UP-TO-DATE
:compileJava
:instrumentClasses

BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date

Alternatively, you can get Gradle to access the appropriate property itself by using the project.files() method in place of project.fileTree():

Example 114. Setting up an inferred task dependency with files()

build.gradle

task instrumentClasses2(type: Instrument) {
    classFiles = files(compileJava)
    destinationDir = file("$buildDir/instrumented")
}

Output of gradle clean instrumentClasses2

> gradle clean instrumentClasses2
:clean UP-TO-DATE
:compileJava
:instrumentClasses2

BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date

Remember that files() can take tasks as arguments, whereas fileTree() cannot.

The downside of this approach is that all file outputs of the source task become the input files of the target - instrumentClasses in this case. That’s fine as long as the source task only has a single file-based output, like the JavaCompile task. But if you have to link just one output property among several, then you need to explicitly tell Gradle which task generates the input files using the builtBy method:

Example 115. Setting up an inferred task dependency with builtBy()

build.gradle

task instrumentClassesBuiltBy(type: Instrument) {
    classFiles = fileTree(compileJava.destinationDir) {
        builtBy compileJava
    }
    destinationDir = file("$buildDir/instrumented")
}

Output of gradle clean instrumentClassesBuiltBy

> gradle clean instrumentClassesBuiltBy
:clean UP-TO-DATE
:compileJava
:instrumentClassesBuiltBy

BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date

You can of course just add an explicit task dependency via dependsOn, but the above approach provides more semantic meaning, explaining why compileJava has to run beforehand.

Providing custom up-to-date logic

Gradle automatically handles up-to-date checks for output files and directories, but what if the task output is something else entirely? Perhaps it’s an update to a web service or a database table. Gradle has no way of knowing how to check whether the task is up to date in such cases.

That’s where the upToDateWhen() method on TaskOutputs comes in. This takes a predicate function that is used to determine whether a task is up to date or not. One use case is to disable up-to-date checks completely for a task, like so:

Example 116. Ignoring up-to-date checks

build.gradle

task alwaysInstrumentClasses(type: Instrument) {
    classFiles = files(compileJava)
    destinationDir = file("$buildDir/instrumented")
    outputs.upToDateWhen { false }
}

Output of gradle clean alwaysInstrumentClasses

> gradle clean alwaysInstrumentClasses
:compileJava
:alwaysInstrumentClasses

BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date

Output of gradle alwaysInstrumentClasses

> gradle alwaysInstrumentClasses
:compileJava UP-TO-DATE
:alwaysInstrumentClasses

BUILD SUCCESSFUL in 0s
2 actionable tasks: 1 executed, 1 up-to-date

The { false } closure ensures that alwaysInstrumentClasses will always run, irrespective of whether there is any change in the inputs or outputs.

You can of course put more complex logic into the closure. You could check whether a particular record in a database table exists or has changed for example. Just be aware that up-to-date checks should save you time. Don’t add checks that cost as much or more time than the standard execution of the task. In fact, if a task ends up running frequently anyway, because it’s rarely up to date, then it may not be worth having an up-to-date check at all. Remember that your checks will always run if the task is in the execution task graph.

One common mistake is to use upToDateWhen() instead of Task.onlyIf(). If you want to skip a task on the basis of some condition unrelated to the task inputs and outputs, then you should use onlyIf(), for example when you want to skip a task depending on whether a particular property is set.
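
A short sketch of onlyIf() in use, assuming a hypothetical uploadDocs task and a publishDocs project property:

task uploadDocs {
    doLast {
        println 'Uploading documentation...'
    }
    // Skipped unless the build is run with -PpublishDocs;
    // the condition has nothing to do with inputs or outputs
    onlyIf { project.hasProperty('publishDocs') }
}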

Configure input normalization

For up-to-date checks and the build cache, Gradle needs to determine whether two task input properties have the same value. To do so, Gradle first normalizes both inputs and then compares the results. For example, for a compile classpath, Gradle extracts the ABI signature from the classes on the classpath and then compares signatures between the last Gradle run and the current Gradle run, as described in the section called “Compile avoidance”.

It is possible to customize Gradle’s built-in strategy for runtime classpath normalization. All inputs annotated with @Classpath are considered to be runtime classpaths.

Let’s say you want to add a file build-info.properties to all your produced jar files which contains information about the build, e.g. the timestamp when the build started or some ID to identify the CI job that published the artifact. This file is only used for auditing purposes, and has no effect on the outcome of running tests. Nonetheless, this file is part of the runtime classpath for the test task and changes on every build invocation. Therefore, the test task would never be up-to-date or pulled from the build cache. In order to benefit from incremental builds again, you can tell Gradle to ignore this file on the runtime classpath at the project level by using Project.normalization(org.gradle.api.Action):

Example 117. Runtime classpath normalization

build.gradle

normalization {
    runtimeClasspath {
        ignore 'build-info.properties'
    }
}

The effect of this configuration would be that changes to build-info.properties would be ignored for up-to-date checks and build cache key calculations. Note that this will not change the runtime behavior of the test task - i.e. any test is still able to load build-info.properties and the runtime classpath is still the same as before.

Stale task outputs

When the Gradle version changes, Gradle detects that outputs from tasks that ran with older versions of Gradle need to be removed to ensure that the newest versions of the tasks start from a known clean state.

Automatic clean-up of stale output directories has only been implemented for the output of source sets (Java/Groovy/Scala compilation).

Task rules

Sometimes you want to have a task whose behavior depends on a large or effectively infinite range of parameter values. A very nice and expressive way to provide such tasks is task rules:

Example 118. Task rule

build.gradle

tasks.addRule("Pattern: ping<ID>") { String taskName ->
    if (taskName.startsWith("ping")) {
        task(taskName) {
            doLast {
                println "Pinging: " + (taskName - 'ping')
            }
        }
    }
}

Output of gradle -q pingServer1

> gradle -q pingServer1
Pinging: Server1

The String parameter is used as a description for the rule, which is shown with gradle tasks.

Rules are not only used when calling tasks from the command line. You can also create dependsOn relations on rule based tasks:

Example 119. Dependency on rule based tasks

build.gradle

tasks.addRule("Pattern: ping<ID>") { String taskName ->
    if (taskName.startsWith("ping")) {
        task(taskName) {
            doLast {
                println "Pinging: " + (taskName - 'ping')
            }
        }
    }
}

task groupPing {
    dependsOn pingServer1, pingServer2
}

Output of gradle -q groupPing

> gradle -q groupPing
Pinging: Server1
Pinging: Server2

If you run “gradle -q tasks” you won’t find a task named “pingServer1” or “pingServer2”, but this script is executing logic based on the request to run those tasks.

Finalizer tasks

Finalizer tasks are an incubating feature (see the section called “Incubating”).

Finalizer tasks are automatically added to the task graph when the finalized task is scheduled to run.

Example 120. Adding a task finalizer

build.gradle

task taskX {
    doLast {
        println 'taskX'
    }
}
task taskY {
    doLast {
        println 'taskY'
    }
}

taskX.finalizedBy taskY

Output of gradle -q taskX

> gradle -q taskX
taskX
taskY

Finalizer tasks will be executed even if the finalized task fails.

Example 121. Task finalizer for a failing task

build.gradle

task taskX {
    doLast {
        println 'taskX'
        throw new RuntimeException()
    }
}
task taskY {
    doLast {
        println 'taskY'
    }
}

taskX.finalizedBy taskY

Output of gradle -q taskX

> gradle -q taskX
taskX
taskY

On the other hand, finalizer tasks are not executed if the finalized task didn’t do any work, for example if it is considered up to date or if a dependent task fails.

Finalizer tasks are useful in situations where the build creates a resource that has to be cleaned up regardless of the build failing or succeeding. An example of such a resource is a web container that is started before an integration test task and which should be always shut down, even if some of the tests fail.

To specify a finalizer task you use the Task.finalizedBy(java.lang.Object[]) method. This method accepts a task instance, a task name, or any other input accepted by Task.dependsOn(java.lang.Object[]).
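
A sketch of the web container scenario described above, with println placeholders standing in for the real start/stop logic and hypothetical task names:

task startWebContainer {
    doLast { println 'Starting web container...' }
}

task integTest {
    dependsOn startWebContainer
    doLast { println 'Running integration tests...' }
}

task stopWebContainer {
    doLast { println 'Stopping web container...' }
}

// stopWebContainer runs even if integTest fails
integTest.finalizedBy stopWebContainer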

Lifecycle tasks

Lifecycle tasks are tasks that do not do work themselves. They typically do not have any task actions. Lifecycle tasks can represent several concepts:

  • a work-flow step (e.g., run all checks with check)

  • a buildable thing (e.g., create a debug 32-bit executable for native components with debug32MainExecutable)

  • a convenience task to execute many of the same logical tasks (e.g., run all compilation tasks with compileAll)

Many Gradle plugins define their own lifecycle tasks to make it convenient to do specific things. When developing your own plugins, you should consider using your own lifecycle tasks or hooking into some of the tasks already provided by Gradle. See the section called “Tasks” in the Java plugin chapter for an example.

Unless a lifecycle task has actions, its outcome is determined by its dependencies. If any of the task’s dependencies are executed, the lifecycle task will be considered executed. If all of the task’s dependencies are up-to-date, skipped or loaded from the cache, the lifecycle task will be considered up-to-date.
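
As a sketch, a convenience lifecycle task like the compileAll example above might look like this in a build that applies the Java plugin:

// No actions of its own; its outcome is derived entirely
// from the outcomes of its dependencies
task compileAll {
    dependsOn 'compileJava', 'compileTestJava'
}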

Summary

If you are coming from Ant, an enhanced Gradle task like Copy seems like a cross between an Ant target and an Ant task. Although Ant’s tasks and targets are really different entities, Gradle combines these notions into a single entity. Simple Gradle tasks are like Ant’s targets, but enhanced Gradle tasks also include aspects of Ant tasks. All of Gradle’s tasks share a common API and you can create dependencies between them. These tasks are much easier to configure than an Ant task. They make full use of the type system, and are more expressive and easier to maintain.



[6] You might be wondering why there is neither an import for the StopExecutionException nor do we access it via its fully qualified name. The reason is, that Gradle adds a set of default imports to your script (see the section called “Default imports”).

Working With Files

Most builds work with files. Gradle adds some concepts and APIs to help you achieve this.

Locating files

You can locate a file relative to the project directory using the Project.file(java.lang.Object) method.

Example 122. Locating files

build.gradle

// Using a relative path
File configFile = file('src/config.xml')

// Using an absolute path
configFile = file(configFile.absolutePath)

// Using a File object with a relative path
configFile = file(new File('src/config.xml'))

// Using a java.nio.file.Path object with a relative path
configFile = file(Paths.get('src', 'config.xml'))

// Using an absolute java.nio.file.Path object
configFile = file(Paths.get(System.getProperty('user.home')).resolve('global-config.xml'))

You can pass any object to the file() method, and it will attempt to convert the value to an absolute File object. Usually, you would pass it a String, File or Path instance. If this path is an absolute path, it is used to construct a File instance. Otherwise, a File instance is constructed by prepending the project directory path to the supplied path. The file() method also understands URLs, such as file:/some/path.xml.

Using this method is a useful way to convert some user provided value into an absolute File. It is preferable to using new File(somePath), as file() always evaluates the supplied path relative to the project directory, which is fixed, rather than the current working directory, which can change depending on how the user runs Gradle.

File collections

A file collection is simply a set of files. It is represented by the FileCollection interface. Many objects in the Gradle API implement this interface. For example, dependency configurations implement FileCollection.

One way to obtain a FileCollection instance is to use the Project.files(java.lang.Object[]) method. You can pass this method any number of objects, which are then converted into a set of File objects. The files() method accepts any type of object as its parameters. These are evaluated relative to the project directory, as per the file() method, described in the section called “Locating files”. You can also pass collections, iterables, maps and arrays to the files() method. These are flattened and the contents converted to File instances.

Example 123. Creating a file collection

build.gradle

FileCollection collection = files('src/file1.txt',
                                  new File('src/file2.txt'),
                                  ['src/file3.txt', 'src/file4.txt'],
                                  Paths.get('src', 'file5.txt'))

A file collection is iterable, and can be converted to a number of other types using the as operator. You can also add 2 file collections together using the + operator, or subtract one file collection from another using the - operator. Here are some examples of what you can do with a file collection.

Example 124. Using a file collection

build.gradle

// Iterate over the files in the collection
collection.each { File file ->
    println file.name
}

// Convert the collection to various types
Set set = collection.files
Set set2 = collection as Set
List list = collection as List
String path = collection.asPath
File file = collection.singleFile
File file2 = collection as File

// Add and subtract collections
def union = collection + files('src/file3.txt')
def different = collection - files('src/file3.txt')


You can also pass the files() method a closure or a Callable instance. This is called when the contents of the collection are queried, and its return value is converted to a set of File instances. The return value can be an object of any of the types supported by the files() method. This is a simple way to 'implement' the FileCollection interface.

Example 125. Implementing a file collection

build.gradle

task list {
    doLast {
        File srcDir

        // Create a file collection using a closure
        collection = files { srcDir.listFiles() }

        srcDir = file('src')
        println "Contents of $srcDir.name"
        collection.collect { relativePath(it) }.sort().each { println it }

        srcDir = file('src2')
        println "Contents of $srcDir.name"
        collection.collect { relativePath(it) }.sort().each { println it }
    }
}

Output of gradle -q list

> gradle -q list
Contents of src
src/dir1
src/file1.txt
Contents of src2
src2/dir1
src2/dir2

Some other types of things you can pass to files():

FileCollection

These are flattened and the contents included in the file collection.

Task

The output files of the task are included in the file collection.

TaskOutputs

The output files of the TaskOutputs are included in the file collection.

It is important to note that the content of a file collection is evaluated lazily, when it is needed. This means you can, for example, create a FileCollection that represents files which will be created in the future by, say, some task.

File trees

A file tree is a collection of files arranged in a hierarchy. For example, a file tree might represent a directory tree or the contents of a ZIP file. It is represented by the FileTree interface. The FileTree interface extends FileCollection, so you can treat a file tree exactly the same way as you would a file collection. Several objects in Gradle implement the FileTree interface, such as source sets.

One way to obtain a FileTree instance is to use the Project.fileTree(java.util.Map) method. This creates a FileTree defined with a base directory, and optionally some Ant-style include and exclude patterns.

Example 126. Creating a file tree

build.gradle

// Create a file tree with a base directory
FileTree tree = fileTree(dir: 'src/main')

// Add include and exclude patterns to the tree
tree.include '**/*.java'
tree.exclude '**/Abstract*'

// Create a tree using path
tree = fileTree('src').include('**/*.java')

// Create a tree using closure
tree = fileTree('src') {
    include '**/*.java'
}

// Create a tree using a map
tree = fileTree(dir: 'src', include: '**/*.java')
tree = fileTree(dir: 'src', includes: ['**/*.java', '**/*.xml'])
tree = fileTree(dir: 'src', include: '**/*.java', exclude: '**/*test*/**')

You use a file tree in the same way you use a file collection. You can also visit the contents of the tree, and select a sub-tree using Ant-style patterns:

Example 127. Using a file tree

build.gradle

// Iterate over the contents of a tree
tree.each {File file ->
    println file
}

// Filter a tree
FileTree filtered = tree.matching {
    include 'org/gradle/api/**'
}

// Add trees together
FileTree sum = tree + fileTree(dir: 'src/test')

// Visit the elements of the tree
tree.visit {element ->
    println "$element.relativePath => $element.file"
}

By default, the FileTree instance fileTree() returns will apply some Ant-style default exclude patterns for convenience. For the complete default exclusion list, see Default Excludes.

Using the contents of an archive as a file tree

You can use the contents of an archive, such as a ZIP or TAR file, as a file tree. You do this using the Project.zipTree(java.lang.Object) and Project.tarTree(java.lang.Object) methods. These methods return a FileTree instance which you can use like any other file tree or file collection. For example, you can use it to expand the archive by copying the contents, or to merge some archives into another.

Example 128. Using an archive as a file tree

build.gradle

// Create a ZIP file tree using path
FileTree zip = zipTree('someFile.zip')

// Create a TAR file tree using path
FileTree tar = tarTree('someFile.tar')

//tar tree attempts to guess the compression based on the file extension
//however if you must specify the compression explicitly you can:
FileTree someTar = tarTree(resources.gzip('someTar.ext'))
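
For example, to expand an archive by copying its contents into a directory (a sketch assuming a someFile.zip in the project directory):

task unpackDist(type: Copy) {
    // Copying from a zipTree() expands the archive
    from zipTree('someFile.zip')
    into "$buildDir/unpacked/dist"
}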


Specifying a set of input files

Many objects in Gradle have properties which accept a set of input files. For example, the JavaCompile task has a source property, which defines the source files to compile. You can set the value of this property using any of the types supported by the files() method, which was shown above. This means you can set the property using, for example, a File, String, collection, FileCollection or even a closure. Here are some examples:

Example 129. Specifying a set of files

build.gradle

task compile(type: JavaCompile)

// Use a File object to specify the source directory
compile {
    source = file('src/main/java')
}

// Use a String path to specify the source directory
compile {
    source = 'src/main/java'
}

// Use a collection to specify multiple source directories
compile {
    source = ['src/main/java', '../shared/java']
}

// Use a FileCollection (or FileTree in this case) to specify the source files
compile {
    source = fileTree(dir: 'src/main/java').matching { include 'org/gradle/api/**' }
}

// Using a closure to specify the source files.
compile {
    source = {
        // Use the contents of each zip file in the src dir
        file('src').listFiles().findAll {it.name.endsWith('.zip')}.collect { zipTree(it) }
    }
}

Usually, there is a method with the same name as the property, which appends to the set of files. Again, this method accepts any of the types supported by the files() method.

Example 130. Appending a set of files

build.gradle

compile {
    // Add some source directories use String paths
    source 'src/main/java', 'src/main/groovy'

    // Add a source directory using a File object
    source file('../shared/java')

    // Add some source directories using a closure
    source { file('src/test/').listFiles() }
}

Copying files

You can use the Copy task to copy files. The copy task is very flexible, and allows you to, for example, filter the contents of the files as they are copied and map the file names.

To use the Copy task, you must provide a set of source files to copy, and a destination directory to copy the files to. You may also specify how to transform the files as they are copied. You do all this using a copy spec. A copy spec is represented by the CopySpec interface. The Copy task implements this interface. You specify the source files using the CopySpec.from(java.lang.Object[]) method. To specify the destination directory, use the CopySpec.into(java.lang.Object) method.

Example 131. Copying files using the copy task

build.gradle

task copyTask(type: Copy) {
    from 'src/main/webapp'
    into 'build/explodedWar'
}

The from() method accepts any of the arguments that the files() method does. When an argument resolves to a directory, everything under that directory (but not the directory itself) is recursively copied into the destination directory. When an argument resolves to a file, that file is copied into the destination directory. When an argument resolves to a non-existent file, that argument is ignored. If the argument is a task, the output files (i.e. the files the task creates) of the task are copied and the task is automatically added as a dependency of the Copy task. The into() method accepts any of the arguments that the file() method does. Here is another example:

Example 132. Specifying copy task source files and destination directory

build.gradle

task anotherCopyTask(type: Copy) {
    // Copy everything under src/main/webapp
    from 'src/main/webapp'
    // Copy a single file
    from 'src/staging/index.html'
    // Copy the output of a task
    from copyTask
    // Copy the output of a task using Task outputs explicitly.
    from copyTaskWithPatterns.outputs
    // Copy the contents of a Zip file
    from zipTree('src/main/assets.zip')
    // Determine the destination directory later
    into { getDestDir() }
}

You can select the files to copy using Ant-style include or exclude patterns, or using a closure:

Example 133. Selecting the files to copy

build.gradle

task copyTaskWithPatterns(type: Copy) {
    from 'src/main/webapp'
    into 'build/explodedWar'
    include '**/*.html'
    include '**/*.jsp'
    exclude { details -> details.file.name.endsWith('.html') &&
                         details.file.text.contains('staging') }
}

You can also use the Project.copy(org.gradle.api.Action) method to copy files. It works the same way as the task, but with some major limitations. First, the copy() method is not incremental (see the section called “Up-to-date checks (AKA Incremental Build)”).

Example 134. Copying files using the copy() method without up-to-date check

build.gradle

task copyMethod {
    doLast {
        copy {
            from 'src/main/webapp'
            into 'build/explodedWar'
            include '**/*.html'
            include '**/*.jsp'
        }
    }
}

Secondly, the copy() method cannot honor task dependencies when a task is used as a copy source (i.e. as an argument to from()) because it’s a method and not a task. As such, if you are using the copy() method as part of a task action, you must explicitly declare all inputs and outputs in order to get the correct behavior.

Example 135. Copying files using the copy() method with up-to-date check

build.gradle

task copyMethodWithExplicitDependencies {
    // up-to-date check for inputs, plus add copyTask as dependency
    inputs.files copyTask
    outputs.dir 'some-dir' // up-to-date check for outputs
    doLast {
        copy {
            // Copy the output of copyTask
            from copyTask
            into 'some-dir'
        }
    }
}

It is preferable to use the Copy task wherever possible, as it supports incremental building and task dependency inference without any extra effort on your part. The copy() method can be used to copy files as part of a task’s implementation. That is, the copy method is intended to be used by custom tasks (see Writing Custom Task Classes) that need to copy files as part of their function. In such a scenario, the custom task should sufficiently declare the inputs/outputs relevant to the copy action.
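
A sketch of that scenario, using a hypothetical InstallApp task type whose declared inputs and outputs drive the up-to-date check while copy() does the work:

class InstallApp extends DefaultTask {
    @InputFiles
    FileCollection appFiles

    @OutputDirectory
    File installDir

    @TaskAction
    void install() {
        // copy() is part of the task implementation; the task's own
        // declared inputs/outputs make the action incremental
        project.copy {
            from appFiles
            into installDir
        }
    }
}

task installApp(type: InstallApp) {
    appFiles = files("$buildDir/libs")   // illustrative location
    installDir = file("$buildDir/install")
}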

Renaming files

Example 136. Renaming files as they are copied

build.gradle

task rename(type: Copy) {
    from 'src/main/webapp'
    into 'build/explodedWar'
    // Use a closure to map the file name
    rename { String fileName ->
        fileName.replace('-staging-', '')
    }
    // Use a regular expression to map the file name
    rename '(.+)-staging-(.+)', '$1$2'
    rename(/(.+)-staging-(.+)/, '$1$2')
}

Filtering files

Example 137. Filtering files as they are copied

build.gradle

import org.apache.tools.ant.filters.FixCrLfFilter
import org.apache.tools.ant.filters.ReplaceTokens

task filter(type: Copy) {
    from 'src/main/webapp'
    into 'build/explodedWar'
    // Substitute property tokens in files
    expand(copyright: '2009', version: '2.3.1')
    expand(project.properties)
    // Use some of the filters provided by Ant
    filter(FixCrLfFilter)
    filter(ReplaceTokens, tokens: [copyright: '2009', version: '2.3.1'])
    // Use a closure to filter each line
    filter { String line ->
        "[$line]"
    }
    // Use a closure to remove lines
    filter { String line ->
        line.startsWith('-') ? null : line
    }
    filteringCharset = 'UTF-8'
}

When you use the ReplaceTokens class with the “filter” operation, the result is a template engine that replaces tokens of the form “@tokenName@” (the Apache Ant-style token) with a set of given values. The “expand” operation does the same thing except it treats the source files as Groovy templates in which tokens take the form “${tokenName}”. Be aware that you may need to escape parts of your source files when using this option, for example if they contain literal “$” or “<%” strings.

It’s a good practice to specify the charset when reading and writing the file, using the filteringCharset property. If not specified, the JVM default charset is used, which might not match with the actual charset of the files to filter, and might be different from one machine to another.

Using the CopySpec class

Copy specs form a hierarchy. A child copy spec inherits its destination path, include patterns, exclude patterns, copy actions, name mappings and filters from its parent spec.

Example 138. Nested copy specs

build.gradle

task nestedSpecs(type: Copy) {
    into 'build/explodedWar'
    exclude '**/*staging*'
    from('src/dist') {
        include '**/*.html'
    }
    into('libs') {
        from configurations.runtime
    }
}

Using the Sync task

The Sync task extends the Copy task. When it executes, it copies the source files into the destination directory, and then removes any files from the destination directory which it did not copy. This can be useful for doing things such as installing your application, creating an exploded copy of your archives, or maintaining a copy of the project’s dependencies.

Here is an example which maintains a copy of the project’s runtime dependencies in the build/libs directory.

Example 139. Using the Sync task to copy dependencies

build.gradle

task libs(type: Sync) {
    from configurations.runtime
    into "$buildDir/libs"
}

Creating archives

A project can have as many JAR archives as you want. You can also add WAR, ZIP and TAR archives to your project. Archives are created using the various archive tasks: Zip, Tar, Jar, War, and Ear. They all work the same way, so let’s look at how you create a ZIP file.

Example 140. Creating a ZIP archive

build.gradle

apply plugin: 'java'

task zip(type: Zip) {
    from 'src/dist'
    into('libs') {
        from configurations.runtime
    }
}

Why are you using the Java plugin?

The Java plugin adds a number of default values for the archive tasks. You can use the archive tasks without using the Java plugin, if you like. You will need to provide values for some additional properties.

The archive tasks all work exactly the same way as the Copy task, and implement the same CopySpec interface. As with the Copy task, you specify the input files using the from() method, and can optionally specify where they end up in the archive using the into() method. You can filter the contents of files, rename files, and do all the other things you can do with a copy spec.

Archive naming

The format of projectName-version.type is used for generated archive file names. For example:

Example 141. Creation of ZIP archive

build.gradle

apply plugin: 'java'

version = 1.0

task myZip(type: Zip) {
    from 'somedir'
}

println myZip.archiveName
println relativePath(myZip.destinationDir)
println relativePath(myZip.archivePath)

Output of gradle -q myZip

> gradle -q myZip
zipProject-1.0.zip
build/distributions
build/distributions/zipProject-1.0.zip

This adds a Zip archive task with the name myZip which produces ZIP file zipProject-1.0.zip. It is important to distinguish between the name of the archive task and the name of the archive generated by the archive task. The default name for archives can be changed with the archivesBaseName project property. The name of the archive can also be changed at any time later on.

There are a number of properties which you can set on an archive task. These are listed below in Table 8. You can, for example, change the name of the archive:

Example 142. Configuration of archive task - custom archive name

build.gradle

apply plugin: 'java'
version = 1.0

task myZip(type: Zip) {
    from 'somedir'
    baseName = 'customName'
}

println myZip.archiveName

Output of gradle -q myZip

> gradle -q myZip
customName-1.0.zip

You can further customize the archive names:

Example 143. Configuration of archive task - appendix & classifier

build.gradle

apply plugin: 'java'
archivesBaseName = 'gradle'
version = 1.0

task myZip(type: Zip) {
    appendix = 'wrapper'
    classifier = 'src'
    from 'somedir'
}

println myZip.archiveName

Output of gradle -q myZip

> gradle -q myZip
gradle-wrapper-1.0-src.zip

Table 8. Archive tasks - naming properties

Property name Type Default value Description

archiveName

String

baseName-appendix-version-classifier.extension

If any of these properties is empty the trailing - is not added to the name.

The base file name of the generated archive

archivePath

File

destinationDir/archiveName

The absolute path of the generated archive.

destinationDir

File

Depends on the archive type. JARs and WARs go into project.buildDir/libs. ZIPs and TARs go into project.buildDir/distributions.

The directory to generate the archive into

baseName

String

project.name

The base name portion of the archive file name.

appendix

String

null

The appendix portion of the archive file name.

version

String

project.version

The version portion of the archive file name.

classifier

String

null

The classifier portion of the archive file name.

extension

String

Depends on the archive type, and for TAR files, the compression type as well: zip, jar, war, tar, tgz or tbz2.

The extension of the archive file name.

Sharing content between multiple archives

You can use the Project.copySpec(org.gradle.api.Action) method to share content between archives.
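
A sketch of sharing a copy spec between a Copy task and a Zip task; the variable and task names are illustrative:

def webAssets = copySpec {
    from 'src/main/webapp'
    include '**/*.html'
}

task copyAssets(type: Copy) {
    into "$buildDir/assets"
    with webAssets
}

task zipAssets(type: Zip) {
    baseName = 'assets'
    destinationDir = file("$buildDir/distributions")
    with webAssets
}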

Reproducible archives

Sometimes it can be desirable to recreate archives byte for byte on different machines. You want to be sure that building an artifact from source code produces the same result, byte for byte, no matter when and where it is built. This is necessary for projects like reproducible-builds.org.

Reproducing the same archive byte for byte poses some challenges, since the order of the files in an archive is influenced by the underlying filesystem. Each time a ZIP, TAR, JAR, WAR or EAR is built from source, the order of the files inside the archive may change. Files that only differ by timestamp also cause archives to be slightly different between builds. All AbstractArchiveTask (e.g. Jar, Zip) tasks shipped with Gradle include incubating support for producing reproducible archives.

For example, to make a Zip task reproducible you need to set Zip.isReproducibleFileOrder() to true and Zip.isPreserveFileTimestamps() to false. In order to make all archive tasks in your build reproducible, consider adding the following configuration to your build file:

Example 144. Activating reproducible archives

build.gradle

tasks.withType(AbstractArchiveTask) {
    preserveFileTimestamps = false
    reproducibleFileOrder = true
}

Often you will want to publish an archive, so that it is usable from another project. This process is described in Publishing artifacts.

Properties files

Properties files are used in many places during Java development. Gradle makes it easy to create properties files as a normal part of the build. You can use the WriteProperties task to create properties files.

The WriteProperties task also fixes a well-known problem with Properties.store() that can reduce the usefulness of incremental builds (see the section called “Up-to-date checks (AKA Incremental Build)”). The standard Java way to write a properties file produces a unique file every time, even when the same properties and values are used, because it includes a timestamp in the comments. Gradle’s WriteProperties task generates exactly the same output byte-for-byte if none of the properties have changed. This is achieved by a few tweaks to how a properties file is generated:

  • no timestamp comment is added to the output

  • the line separator is system independent, but can be configured explicitly (it defaults to '\n')

  • the properties are sorted alphabetically
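For example, a minimal sketch of such a task might look like the following; the output file name and the property entries are illustrative:

build.gradle

task writeVersionProperties(type: WriteProperties) {
    outputFile = file("$buildDir/version.properties")
    // illustrative entries; the output stays byte-for-byte identical
    // between runs as long as these values do not change
    property 'latestVersion', project.version
    property 'builtBy', 'Gradle'
}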

Using Ant from Gradle

Gradle provides excellent integration with Ant. You can use individual Ant tasks or entire Ant builds in your Gradle builds. In fact, you will find that it’s far easier and more powerful to use Ant tasks in a Gradle build script than it is to use Ant’s XML format. You could even use Gradle simply as a powerful Ant task scripting tool.

Ant can be divided into two layers. The first layer is the Ant language. It provides the syntax for the build.xml file, the handling of the targets, special constructs like macrodefs, and so on. In other words, everything except the Ant tasks and types. Gradle understands this language, and allows you to import your Ant build.xml directly into a Gradle project. You can then use the targets of your Ant build as if they were Gradle tasks.

The second layer of Ant is its wealth of Ant tasks and types, like javac, copy or jar. For this layer Gradle provides integration simply by relying on Groovy, and the fantastic AntBuilder.

Finally, since build scripts are Groovy scripts, you can always execute an Ant build as an external process. Your build script may contain statements like: "ant clean compile".execute().[7]

You can use Gradle’s Ant integration as a path for migrating your build from Ant to Gradle. For example, you could start by importing your existing Ant build. Then you could move your dependency declarations from the Ant script to your build file. Finally, you could move your tasks across to your build file, or replace them with some of Gradle’s plugins. This process can be done in parts over time, and you can have a working Gradle build during the entire process.

Using Ant tasks and types in your build

In your build script, a property called ant is provided by Gradle. This is a reference to an AntBuilder instance. This AntBuilder is used to access Ant tasks, types and properties from your build script. There is a very simple mapping from Ant’s build.xml format to Groovy, which is explained below.

You execute an Ant task by calling a method on the AntBuilder instance. You use the task name as the method name. For example, you execute the Ant echo task by calling the ant.echo() method. The attributes of the Ant task are passed as Map parameters to the method. Below is an example of the echo task. Notice that we can also mix Groovy code and the Ant task markup. This can be extremely powerful.

Example 145. Using an Ant task

build.gradle

task hello {
    doLast {
        String greeting = 'hello from Ant'
        ant.echo(message: greeting)
    }
}

Output of gradle hello

> gradle hello
:hello
[ant:echo] hello from Ant

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

You pass nested text to an Ant task by passing it as a parameter of the task method call. In this example, we pass the message for the echo task as nested text:

Example 146. Passing nested text to an Ant task

build.gradle

task hello {
    doLast {
        ant.echo('hello from Ant')
    }
}

Output of gradle hello

> gradle hello
:hello
[ant:echo] hello from Ant

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

You pass nested elements to an Ant task inside a closure. Nested elements are defined in the same way as tasks, by calling a method with the same name as the element we want to define.

Example 147. Passing nested elements to an Ant task

build.gradle

task zip {
    doLast {
        ant.zip(destfile: 'archive.zip') {
            fileset(dir: 'src') {
                include(name: '**.xml')
                exclude(name: '**.java')
            }
        }
    }
}

You can access Ant types in the same way that you access tasks, using the name of the type as the method name. The method call returns the Ant data type, which you can then use directly in your build script. In the following example, we create an Ant path object, then iterate over the contents of it.

Example 148. Using an Ant type

build.gradle

task list {
    doLast {
        def path = ant.path {
            fileset(dir: 'libs', includes: '*.jar')
        }
        path.list().each {
            println it
        }
    }
}

More information about AntBuilder can be found in 'Groovy in Action' 8.4 or at the Groovy Wiki.

Using custom Ant tasks in your build

To make custom tasks available in your build, you can use the taskdef (usually easier) or typedef Ant task, just as you would in a build.xml file. You can then refer to the custom Ant task as you would a built-in Ant task.

Example 149. Using a custom Ant task

build.gradle

task check {
    doLast {
        ant.taskdef(resource: 'checkstyletask.properties') {
            classpath {
                fileset(dir: 'libs', includes: '*.jar')
            }
        }
        ant.checkstyle(config: 'checkstyle.xml') {
            fileset(dir: 'src')
        }
    }
}

You can use Gradle’s dependency management to assemble the classpath to use for the custom tasks. To do this, you need to define a custom configuration for the classpath, then add some dependencies to the configuration. This is described in more detail in the section called “Declaring dependencies”.

Example 150. Declaring the classpath for a custom Ant task

build.gradle

configurations {
    pmd
}

dependencies {
    pmd group: 'pmd', name: 'pmd', version: '4.2.5'
}

To use the classpath configuration, use the asPath property of the custom configuration.

Example 151. Using a custom Ant task and dependency management together

build.gradle

task check {
    doLast {
        ant.taskdef(name: 'pmd',
                    classname: 'net.sourceforge.pmd.ant.PMDTask',
                    classpath: configurations.pmd.asPath)
        ant.pmd(shortFilenames: 'true',
                failonruleviolation: 'true',
                rulesetfiles: file('pmd-rules.xml').toURI().toString()) {
            formatter(type: 'text', toConsole: 'true')
            fileset(dir: 'src')
        }
    }
}

Importing an Ant build

You can use the ant.importBuild() method to import an Ant build into your Gradle project. When you import an Ant build, each Ant target is treated as a Gradle task. This means you can manipulate and execute the Ant targets in exactly the same way as Gradle tasks.

Example 152. Importing an Ant build

build.gradle

ant.importBuild 'build.xml'

build.xml

<project>
    <target name="hello">
        <echo>Hello, from Ant</echo>
    </target>
</project>

Output of gradle hello

> gradle hello
:hello
[ant:echo] Hello, from Ant

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

You can add a task which depends on an Ant target:

Example 153. Task that depends on Ant target

build.gradle

ant.importBuild 'build.xml'

task intro(dependsOn: hello) {
    doLast {
        println 'Hello, from Gradle'
    }
}

Output of gradle intro

> gradle intro
:hello
[ant:echo] Hello, from Ant
:intro
Hello, from Gradle

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

Or, you can add behaviour to an Ant target:

Example 154. Adding behaviour to an Ant target

build.gradle

ant.importBuild 'build.xml'

hello {
    doLast {
        println 'Hello, from Gradle'
    }
}

Output of gradle hello

> gradle hello
:hello
[ant:echo] Hello, from Ant
Hello, from Gradle

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

It is also possible for an Ant target to depend on a Gradle task:

Example 155. Ant target that depends on Gradle task

build.gradle

ant.importBuild 'build.xml'

task intro {
    doLast {
        println 'Hello, from Gradle'
    }
}

build.xml

<project>
    <target name="hello" depends="intro">
        <echo>Hello, from Ant</echo>
    </target>
</project>

Output of gradle hello

> gradle hello
:intro
Hello, from Gradle
:hello
[ant:echo] Hello, from Ant

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

Sometimes it may be necessary to “rename” the task generated for an Ant target to avoid a naming collision with existing Gradle tasks. To do this, use the AntBuilder.importBuild(java.lang.Object, org.gradle.api.Transformer) method.

Example 156. Renaming imported Ant targets

build.gradle

ant.importBuild('build.xml') { antTargetName ->
    'a-' + antTargetName
}

build.xml

<project>
    <target name="hello">
        <echo>Hello, from Ant</echo>
    </target>
</project>

Output of gradle a-hello

> gradle a-hello
:a-hello
[ant:echo] Hello, from Ant

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

Note that while the second argument to this method should be a Transformer, when programming in Groovy we can simply use a closure instead of an anonymous inner class (or similar) due to Groovy’s support for automatically coercing closures to single-abstract-method types.

Ant properties and references

There are several ways to set an Ant property, so that the property can be used by Ant tasks. You can set the property directly on the AntBuilder instance. The Ant properties are also available as a Map which you can change. You can also use the Ant property task. Below are some examples of how to do this.

Example 157. Setting an Ant property

build.gradle

ant.buildDir = buildDir
ant.properties.buildDir = buildDir
ant.properties['buildDir'] = buildDir
ant.property(name: 'buildDir', location: buildDir)

build.xml

<echo>buildDir = ${buildDir}</echo>

Many Ant tasks set properties when they execute. There are several ways to get the value of these properties. You can get the property directly from the AntBuilder instance. The Ant properties are also available as a Map. Below are some examples.

Example 158. Getting an Ant property

build.xml

<property name="antProp" value="a property defined in an Ant build"/>

build.gradle

println ant.antProp
println ant.properties.antProp
println ant.properties['antProp']

There are several ways to set an Ant reference:

Example 159. Setting an Ant reference

build.gradle

ant.path(id: 'classpath', location: 'libs')
ant.references.classpath = ant.path(location: 'libs')
ant.references['classpath'] = ant.path(location: 'libs')

build.xml

<path refid="classpath"/>

There are several ways to get an Ant reference:

Example 160. Getting an Ant reference

build.xml

<path id="antPath" location="libs"/>

build.gradle

println ant.references.antPath
println ant.references['antPath']

Ant logging

Gradle maps Ant message priorities to Gradle log levels so that messages logged from Ant appear in the Gradle output. By default, these are mapped as follows:

Table 9. Ant message priority mapping

Ant Message Priority    Gradle Log Level

VERBOSE                 DEBUG
DEBUG                   DEBUG
INFO                    INFO
WARN                    WARN
ERROR                   ERROR

Fine tuning Ant logging

The default mapping of Ant message priority to Gradle log level can sometimes be problematic. For example, there is no message priority that maps directly to the LIFECYCLE log level, which is the default for Gradle. Many Ant tasks log messages at the INFO priority, which means to expose those messages from Gradle, a build would have to be run with the log level set to INFO, potentially logging much more output than is desired.

Conversely, if an Ant task logs messages at too high a level, suppressing those messages would require the build to be run at a higher log level, such as QUIET. However, this could result in other, desirable output being suppressed.

To help with this, Gradle allows the user to fine tune the Ant logging and control the mapping of message priority to Gradle log level. This is done by setting the priority that should map to the default Gradle LIFECYCLE log level using the AntBuilder.setLifecycleLogLevel(java.lang.String) method. When this value is set, any Ant message logged at the configured priority or above will be logged at least at LIFECYCLE. Any Ant message logged below this priority will be logged at most at INFO.

For example, the following changes the mapping such that Ant INFO priority messages are exposed at the LIFECYCLE log level.

Example 161. Fine tuning Ant logging

build.gradle

ant.lifecycleLogLevel = "INFO"

task hello {
    doLast {
        ant.echo(level: "info", message: "hello from info priority!")
    }
}

Output of gradle hello

> gradle hello
:hello
[ant:echo] hello from info priority!

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

On the other hand, if lifecycleLogLevel were set to ERROR, Ant messages logged at the WARN priority would no longer be logged at the WARN log level. They would now be logged at the INFO level and would be suppressed by default.
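A minimal sketch of that scenario (the task and message are illustrative):

build.gradle

ant.lifecycleLogLevel = "ERROR"

task warning {
    doLast {
        // logged at Ant's WARN priority; with the mapping above it is
        // demoted to INFO and hidden at Gradle's default log level
        ant.echo(level: "warning", message: "hello from warn priority!")
    }
}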

API

The Ant integration is provided by AntBuilder.



[7] In Groovy you can execute Strings. To learn more about executing external processes with Groovy, have a look in 'Groovy in Action' 9.3.2 or at the Groovy wiki.

Build Lifecycle

We said earlier that the core of Gradle is a language for dependency based programming. In Gradle terms this means that you can define tasks and dependencies between tasks. Gradle guarantees that these tasks are executed in the order of their dependencies, and that each task is executed only once. These tasks form a Directed Acyclic Graph. There are build tools that build up such a dependency graph as they execute their tasks. Gradle builds the complete dependency graph before any task is executed. This lies at the heart of Gradle and makes many things possible which would not be possible otherwise.

Your build scripts configure this dependency graph. Therefore they are strictly speaking build configuration scripts.

Build phases

A Gradle build has three distinct phases.

Initialization

Gradle supports single and multi-project builds. During the initialization phase, Gradle determines which projects are going to take part in the build, and creates a Project instance for each of these projects.

Configuration

During this phase the project objects are configured. The build scripts of all projects which are part of the build are executed. Gradle 1.4 introduced an incubating opt-in feature called configuration on demand. In this mode, Gradle configures only relevant projects (see the section called “Configuration on demand”).

Execution

Gradle determines the subset of the tasks, created and configured during the configuration phase, to be executed. The subset is determined by the task name arguments passed to the gradle command and the current directory. Gradle then executes each of the selected tasks.

Settings file

Besides the build script files, Gradle defines a settings file. The settings file is determined by Gradle via a naming convention. The default name for this file is settings.gradle. Later in this chapter we explain how Gradle looks for a settings file.

The settings file is executed during the initialization phase. A multiproject build must have a settings.gradle file in the root project of the multiproject hierarchy. It is required because the settings file defines which projects are taking part in the multi-project build (see Authoring Multi-Project Builds). For a single-project build, a settings file is optional. Besides defining the included projects, you might need it to add libraries to your build script classpath (see Organizing Build Logic). Let’s first do some introspection with a single project build:

Example 162. Single project build

settings.gradle

println 'This is executed during the initialization phase.'

build.gradle

println 'This is executed during the configuration phase.'

task configured {
    println 'This is also executed during the configuration phase.'
}

task test {
    doLast {
        println 'This is executed during the execution phase.'
    }
}

task testBoth {
    doFirst {
      println 'This is executed first during the execution phase.'
    }
    doLast {
      println 'This is executed last during the execution phase.'
    }
    println 'This is executed during the configuration phase as well.'
}

Output of gradle test testBoth

> gradle test testBoth
This is executed during the initialization phase.
This is executed during the configuration phase.
This is also executed during the configuration phase.
This is executed during the configuration phase as well.
:test
This is executed during the execution phase.
:testBoth
This is executed first during the execution phase.
This is executed last during the execution phase.

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

For a build script, property access and method calls are delegated to a project object. Similarly, property access and method calls within the settings file are delegated to a settings object. Look at the Settings class in the API documentation for more information.

Multi-project builds

A multi-project build is a build where you build more than one project during a single execution of Gradle. You have to declare the projects taking part in the multiproject build in the settings file. There is much more to say about multi-project builds in the chapter dedicated to this topic (see Authoring Multi-Project Builds).

Project locations

Multi-project builds are always represented by a tree with a single root. Each element in the tree represents a project. A project has a path which denotes the position of the project in the multi-project build tree. In most cases the project path is consistent with the physical location of the project in the file system. However, this behavior is configurable. The project tree is created in the settings.gradle file. By default it is assumed that the location of the settings file is also the location of the root project. But you can redefine the location of the root project in the settings file.
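For example, a settings.gradle sketch along these lines (the directory name is made up) relocates the root project directory:

settings.gradle

// the root project's sources live in a sibling directory of the settings file
rootProject.projectDir = new File(settingsDir, '../gradle-root')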

Building the tree

In the settings file you can use a set of methods to build the project tree. Hierarchical and flat physical layouts get special support.

Hierarchical layouts

Example 163. Hierarchical layout

settings.gradle

include 'project1', 'project2:child', 'project3:child1'

The include method takes project paths as arguments. The project path is assumed to be equal to the relative physical file system path. For example, a path 'services:api' is mapped by default to a folder 'services/api' (relative from the project root). You only need to specify the leaves of the tree. This means that the inclusion of the path 'services:hotels:api' will result in creating 3 projects: 'services', 'services:hotels' and 'services:hotels:api'. More examples of how to work with the project path can be found in the DSL documentation of Settings.include(java.lang.String[]).
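For instance, a single include of a deep path is enough to create the intermediate projects as well:

settings.gradle

// creates the projects ':services', ':services:hotels' and ':services:hotels:api'
include 'services:hotels:api'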

Flat layouts

Example 164. Flat layout

settings.gradle

includeFlat 'project3', 'project4'

The includeFlat method takes directory names as arguments. These directories need to exist as siblings of the root project directory. Each of these directories is treated as a child project of the root project in the multi-project tree.

Modifying elements of the project tree

The multi-project tree created in the settings file is made up of so-called project descriptors. You can modify these descriptors in the settings file at any time. To access a descriptor you can do:

Example 165. Lookup of elements of the project tree

settings.gradle

println rootProject.name
println project(':projectA').name

Using this descriptor you can change the name, project directory and build file of a project.

Example 166. Modification of elements of the project tree

settings.gradle

rootProject.name = 'main'
project(':projectA').projectDir = new File(settingsDir, '../my-project-a')
project(':projectA').buildFileName = 'projectA.gradle'

Look at the ProjectDescriptor class in the API documentation for more information.

Initialization

How does Gradle know whether to do a single or multiproject build? If you trigger a multiproject build from a directory with a settings file, things are easy. But Gradle also allows you to execute the build from within any subproject taking part in the build.[8] If you execute Gradle from within a project with no settings.gradle file, Gradle looks for a settings.gradle file in the following way:

  • It looks in a directory called master which has the same nesting level as the current dir.

  • If not found yet, it searches parent directories.

  • If not found yet, the build is executed as a single project build.

  • If a settings.gradle file is found, Gradle checks if the current project is part of the multiproject hierarchy defined in the found settings.gradle file. If not, the build is executed as a single project build. Otherwise a multiproject build is executed.

What is the purpose of this behavior? Gradle needs to determine whether the project you are in is a subproject of a multiproject build or not. Of course, if it is a subproject, only the subproject and its dependent projects are built, but Gradle needs to create the build configuration for the whole multiproject build (see Authoring Multi-Project Builds). You can use the -u command line option to tell Gradle not to look in the parent hierarchy for a settings.gradle file. The current project is then always built as a single project build. If the current project contains a settings.gradle file, the -u option has no meaning. Such a build is always executed as:

  • a single project build, if the settings.gradle file does not define a multiproject hierarchy

  • a multiproject build, if the settings.gradle file does define a multiproject hierarchy.

The automatic search for a settings.gradle file only works for multi-project builds with a physical hierarchical or flat layout. For a flat layout you must additionally follow the naming convention described above (“master”). Gradle supports arbitrary physical layouts for a multiproject build, but for such arbitrary layouts you need to execute the build from the directory where the settings file is located. For information on how to run partial builds from the root see the section called “Running tasks by their absolute path”.

Gradle creates a Project object for every project taking part in the build. For a multi-project build these are the projects specified in the Settings object (plus the root project). Each project object has by default a name equal to the name of its top level directory, and every project except the root project has a parent project. Any project may have child projects.

Configuration and execution of a single project build

For a single project build, the workflow of the phases after initialization is pretty simple. The build script is executed against the project object that was created during the initialization phase. Then Gradle looks for tasks with names equal to those passed as command line arguments. If these task names exist, they are executed as a separate build in the order you have passed them. The configuration and execution for multi-project builds is discussed in Authoring Multi-Project Builds.

Responding to the lifecycle in the build script

Your build script can receive notifications as the build progresses through its lifecycle. These notifications generally take two forms: You can either implement a particular listener interface, or you can provide a closure to execute when the notification is fired. The examples below use closures. For details on how to use the listener interfaces, refer to the API documentation.

Project evaluation

You can receive a notification immediately before and after a project is evaluated. This can be used to do things like performing additional configuration once all the definitions in a build script have been applied, or for some custom logging or profiling.

Below is an example which adds a test task to each project which has a hasTests property value of true.

Example 167. Adding of test task to each project which has certain property set

build.gradle

allprojects {
    afterEvaluate { project ->
        if (project.hasTests) {
            println "Adding test task to $project"
            project.task('test') {
                doLast {
                    println "Running tests for $project"
                }
            }
        }
    }
}

projectA.gradle

hasTests = true

Output of gradle -q test

> gradle -q test
Adding test task to project ':projectA'
Running tests for project ':projectA'

This example uses method Project.afterEvaluate() to add a closure which is executed after the project is evaluated.

It is also possible to receive notifications when any project is evaluated. This example performs some custom logging of project evaluation. Notice that the afterProject notification is received regardless of whether the project evaluates successfully or fails with an exception.

Example 168. Notifications

build.gradle

gradle.afterProject {project, projectState ->
    if (projectState.failure) {
        println "Evaluation of $project FAILED"
    } else {
        println "Evaluation of $project succeeded"
    }
}

Output of gradle -q test

> gradle -q test
Evaluation of root project 'buildProjectEvaluateEvents' succeeded
Evaluation of project ':projectA' succeeded
Evaluation of project ':projectB' FAILED

You can also add a ProjectEvaluationListener to the Gradle object to receive these events.
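A minimal sketch of the listener-interface form (the log messages are illustrative):

build.gradle

gradle.addProjectEvaluationListener(new ProjectEvaluationListener() {
    void beforeEvaluate(Project project) {
        println "About to evaluate $project"
    }
    void afterEvaluate(Project project, ProjectState state) {
        println "Evaluation of $project finished"
    }
})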

Task creation

You can receive a notification immediately after a task is added to a project. This can be used to set some default values or add behaviour before the task is made available in the build file.

The following example sets the srcDir property of each task as it is created.

Example 169. Setting of certain property to all tasks

build.gradle

tasks.whenTaskAdded { task ->
    task.ext.srcDir = 'src/main/java'
}

task a

println "source dir is $a.srcDir"

Output of gradle -q a

> gradle -q a
source dir is src/main/java

You can also add an Action to a TaskContainer to receive these events.

Task execution graph ready

You can receive a notification immediately after the task execution graph has been populated. We have seen this already in the section called “Configure by DAG”.

You can also add a TaskExecutionGraphListener to the TaskExecutionGraph to receive these events.
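As a quick sketch, the closure form looks like this (the message is illustrative):

build.gradle

gradle.taskGraph.whenReady { graph ->
    println "Task graph ready: ${graph.allTasks.size()} tasks scheduled"
}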

Task execution

You can receive a notification immediately before and after any task is executed.

The following example logs the start and end of each task execution. Notice that the afterTask notification is received regardless of whether the task completes successfully or fails with an exception.

Example 170. Logging of start and end of each task execution

build.gradle

task ok

task broken(dependsOn: ok) {
    doLast {
        throw new RuntimeException('broken')
    }
}

gradle.taskGraph.beforeTask { Task task ->
    println "executing $task ..."
}

gradle.taskGraph.afterTask { Task task, TaskState state ->
    if (state.failure) {
        println "FAILED"
    }
    else {
        println "done"
    }
}

Output of gradle -q broken

> gradle -q broken
executing task ':ok' ...
done
executing task ':broken' ...
FAILED

You can also add a TaskExecutionListener to the TaskExecutionGraph to receive these events.
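A minimal sketch of the listener-interface form, equivalent to the closures above (the messages are illustrative):

build.gradle

gradle.taskGraph.addTaskExecutionListener(new TaskExecutionListener() {
    void beforeExecute(Task task) {
        println "executing $task ..."
    }
    void afterExecute(Task task, TaskState state) {
        println(state.failure ? 'FAILED' : 'done')
    }
})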



[8] Gradle supports partial multiproject builds (see Authoring Multi-Project Builds).

Logging

The log is the main 'UI' of a build tool. If it is too verbose, real warnings and problems are easily hidden. On the other hand you need relevant information for figuring out if things have gone wrong. Gradle defines 6 log levels, as shown in Table 10. There are two Gradle-specific log levels, in addition to the ones you might normally see. Those levels are QUIET and LIFECYCLE. The latter is the default, and is used to report build progress.

Table 10. Log levels

Level      Used for

ERROR      Error messages
QUIET      Important information messages
WARNING    Warning messages
LIFECYCLE  Progress information messages
INFO       Information messages
DEBUG      Debug messages

The rich components of the console (build status and work in progress area) are displayed regardless of the log level used. Before Gradle 4.0 those rich components were only displayed at log level LIFECYCLE or below.

Choosing a log level

You can use the command line switches shown in Table 11 to choose different log levels. You can also configure the log level using gradle.properties, see the section called “Gradle properties”. In Table 12 you find the command line switches which affect stacktrace logging.

Table 11. Log level command-line options

Option              Outputs Log Levels

no logging options  LIFECYCLE and higher
-q or --quiet       QUIET and higher
-w or --warn        WARN and higher
-i or --info        INFO and higher
-d or --debug       DEBUG and higher (that is, all log messages)

Table 12. Stacktrace command-line options

Option Meaning

No stacktrace options

No stacktraces are printed to the console in case of a build error (e.g. a compile error). Only in case of internal exceptions will stacktraces be printed. If the DEBUG log level is chosen, truncated stacktraces are always printed.

-s or --stacktrace

Truncated stacktraces are printed. We recommend this over full stacktraces: Groovy full stacktraces are extremely verbose due to the underlying dynamic invocation mechanisms, yet they usually do not contain relevant information about what has gone wrong in your code. This option also renders stacktraces for deprecation warnings.

-S or --full-stacktrace

The full stacktraces are printed out. This option renders stacktraces for deprecation warnings.

Writing your own log messages

A simple option for logging in your build file is to write messages to standard output. Gradle redirects anything written to standard output to its logging system at the QUIET log level.

Example 171. Using stdout to write log messages

build.gradle

println 'A message which is logged at QUIET level'

Gradle also provides a logger property to a build script, which is an instance of Logger. This interface extends the SLF4J Logger interface and adds a few Gradle-specific methods to it. Below is an example of how this is used in the build script:

Example 172. Writing your own log messages

build.gradle

logger.quiet('An info log message which is always logged.')
logger.error('An error log message.')
logger.warn('A warning log message.')
logger.lifecycle('A lifecycle info log message.')
logger.info('An info log message.')
logger.debug('A debug log message.')
logger.trace('A trace log message.')

Use the typical SLF4J pattern to replace a placeholder with an actual value as part of the log message.

Example 173. Writing a log message with placeholder

build.gradle

logger.info('A {} log message', 'info')

You can also hook into Gradle’s logging system from within other classes used in the build (classes from the buildSrc directory for example). Simply use an SLF4J logger. You can use this logger the same way as you use the provided logger in the build script.

Example 174. Using SLF4J to write log messages

build.gradle

import org.slf4j.Logger
import org.slf4j.LoggerFactory

Logger slf4jLogger = LoggerFactory.getLogger('some-logger')
slf4jLogger.info('An info log message logged using SLF4j')

Logging from external tools and libraries

Internally, Gradle uses Ant and Ivy. Both have their own logging system. Gradle redirects their logging output into the Gradle logging system. There is a 1:1 mapping from the Ant/Ivy log levels to the Gradle log levels, except for the Ant/Ivy TRACE log level, which is mapped to the Gradle DEBUG log level. This means the default Gradle log level will not show any Ant/Ivy output unless it is an error or a warning.

There are many tools out there which still use standard output for logging. By default, Gradle redirects standard output to the QUIET log level and standard error to the ERROR level. This behavior is configurable. The project object provides a LoggingManager, which allows you to change the log levels that standard out or error are redirected to when your build script is evaluated.

Example 175. Configuring standard output capture

build.gradle

logging.captureStandardOutput LogLevel.INFO
println 'A message which is logged at INFO level'

To change the log level for standard out or error during task execution, tasks also provide a LoggingManager.

Example 176. Configuring standard output capture for a task

build.gradle

task logInfo {
    logging.captureStandardOutput LogLevel.INFO
    doFirst {
        println 'A task message which is logged at INFO level'
    }
}

Gradle also provides integration with the Java Util Logging, Jakarta Commons Logging and Log4j logging toolkits. Any log messages which your build classes write using these logging toolkits will be redirected to Gradle’s logging system.

Changing what Gradle logs

You can replace much of Gradle’s logging UI with your own. You might do this, for example, if you want to customize the UI in some way - to log more or less information, or to change the formatting. You replace the logging using the Gradle.useLogger(java.lang.Object) method. This is accessible from a build script, or an init script, or via the embedding API. Note that this completely disables Gradle’s default output. Below is an example init script which changes how task execution and build completion is logged.

Example 177. Customizing what Gradle logs

init.gradle

useLogger(new CustomEventLogger())

class CustomEventLogger extends BuildAdapter implements TaskExecutionListener {

    public void beforeExecute(Task task) {
        println "[$task.name]"
    }

    public void afterExecute(Task task, TaskState state) {
        println()
    }
    
    public void buildFinished(BuildResult result) {
        println 'build completed'
        if (result.failure != null) {
            result.failure.printStackTrace()
        }
    }
}

Output of gradle -I init.gradle build

> gradle -I init.gradle build
[compile]
compiling source

[testCompile]
compiling test source

[test]
running unit tests

[build]

build completed
3 actionable tasks: 3 executed

Your logger can implement any of the listener interfaces listed below. When you register a logger, only the logging for the interfaces that it implements is replaced. Logging for the other interfaces is left untouched. You can find out more about the listener interfaces in the section called “Responding to the lifecycle in the build script”.

Authoring Multi-Project Builds

The powerful support for multi-project builds is one of Gradle’s unique selling points. This topic is also the most intellectually challenging.

A multi-project build in Gradle consists of one root project, and one or more subprojects that may also have subprojects.

Cross project configuration

While each subproject could configure itself in complete isolation from the other subprojects, it is common for subprojects to share common traits. It is then usually preferable to share configuration among projects, so the same configuration affects several subprojects.

Let’s start with a very simple multi-project build. Gradle is a general purpose build tool at its core, so the projects don’t have to be Java projects. Our first examples are about marine life.

Configuration and execution

The section called “Build phases” describes the phases of every Gradle build. Let’s zoom into the configuration and execution phases of a multi-project build. Configuration here means executing the build.gradle file of a project, which implies e.g. downloading all plugins that were declared using ‘apply plugin’. By default, the configuration of all projects happens before any task is executed. This means that when a single task from a single project is requested, all projects of the multi-project build are configured first. The reason every project needs to be configured is to support the flexibility of accessing and changing any part of the Gradle project model.

Configuration on demand

The configuration injection feature and access to the complete project model are possible because every project is configured before the execution phase. Yet, this approach may not be the most efficient in a very large multi-project build. There are Gradle builds with a hierarchy of hundreds of subprojects. The configuration time of huge multi-project builds may become noticeable. Scalability is an important requirement for Gradle. Hence, starting with version 1.4, Gradle introduced an incubating 'configuration on demand' mode.

Configuration on demand mode attempts to configure only projects that are relevant for requested tasks, i.e. it only executes the build.gradle file of projects that are participating in the build. This way, the configuration time of a large multi-project build can be reduced. In the long term, this mode will become the default mode, possibly the only mode for Gradle build execution. The configuration on demand feature is incubating so not every build is guaranteed to work correctly. The feature should work very well for multi-project builds that have decoupled projects (the section called “Decoupled Projects”). In “configuration on demand” mode, projects are configured as follows:

  • The root project is always configured. This way the typical common configuration is supported (allprojects or subprojects script blocks).

  • The project in the directory where the build is executed is also configured, but only when Gradle is executed without any tasks. This way the default tasks behave correctly when projects are configured on demand.

  • Standard project dependencies are supported and cause the relevant projects to be configured. If project A has a compile dependency on project B then building A causes configuration of both projects.

  • The task dependencies declared via task path are supported and cause relevant projects to be configured. Example: someTask.dependsOn(":someOtherProject:someOtherTask")

  • A task requested via task path from the command line (or Tooling API) causes the relevant project to be configured. For example, building 'projectA:projectB:someTask' causes configuration of projectB.

Eager to try out this new feature? To configure on demand with every build run see the section called “Gradle properties”. To configure on demand just for a given build, see the section called “Performance options”.
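For reference, here is a sketch of both options. To switch configuration on demand on persistently:

gradle.properties

org.gradle.configureondemand=true

Or, for a single invocation:

> gradle --configure-on-demand hello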

Defining common behavior

Let’s look at some examples with the following project tree. This is a multi-project build with a root project named water and a subproject named bluewhale.

Example 178. Multi-project tree - water & bluewhale projects

Build layout

water/
  build.gradle
  settings.gradle
  bluewhale/

Note: The code for this example can be found at samples/userguide/multiproject/firstExample/water in the ‘-all’ distribution of Gradle.

settings.gradle

include 'bluewhale'

And where is the build script for the bluewhale project? In Gradle, build scripts are optional. Obviously for a single project build, a project without a build script doesn’t make much sense. For multi-project builds the situation is different. Let’s look at the build script for the water project and execute it:

Example 179. Build script of water (parent) project

build.gradle

Closure cl = { task -> println "I'm $task.project.name" }
task('hello').doLast(cl)
project(':bluewhale') {
    task('hello').doLast(cl)
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale

Gradle allows you to access any project of the multi-project build from any build script. The Project API provides a method called project(), which takes a path as an argument and returns the Project object for this path. We call the capability to configure a project from any build script cross-project configuration. Gradle implements this via configuration injection.

We are not that happy with the build script of the water project. It is inconvenient to add the task explicitly for every project. We can do better. Let’s first add another project called krill to our multi-project build.

Example 180. Multi-project tree - water, bluewhale & krill projects

Build layout

water/
  build.gradle
  settings.gradle
  bluewhale/
  krill/

Note: The code for this example can be found at samples/userguide/multiproject/addKrill/water in the ‘-all’ distribution of Gradle.

settings.gradle

include 'bluewhale', 'krill'

Now we rewrite the water build script and boil it down to a single line.

Example 181. Water project build script

build.gradle

allprojects {
    task hello {
        doLast { task ->
            println "I'm $task.project.name"
        }
    }
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale
I'm krill

Is this cool or is this cool? And how does this work? The Project API provides a property allprojects which returns a list with the current project and all its subprojects underneath it. If you call allprojects with a closure, the statements of the closure are delegated to the projects associated with allprojects. You could also do an iteration via allprojects.each, but that would be more verbose.

Other build systems use inheritance as the primary means for defining common behavior. We also offer inheritance for projects as you will see later. But Gradle uses configuration injection as the usual way of defining common behavior. We think it provides a very powerful and flexible way of configuring multiproject builds.

Another possibility for sharing configuration is to use a common external script. See the section called “Configuring the project using an external build script” for more information.

Subproject configuration

The Project API also provides a property for accessing the subprojects only.

Defining common behavior

Example 182. Defining common behavior of all projects and subprojects

build.gradle

allprojects {
    task hello {
        doLast { task ->
            println "I'm $task.project.name"
        }
    }
}
subprojects {
    hello {
        doLast {
            println "- I depend on water"
        }
    }
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale
- I depend on water
I'm krill
- I depend on water

You may notice that there are two code snippets referencing the “hello” task. The first one, which uses the “task” keyword, constructs the task and provides its base configuration. The second piece doesn’t use the “task” keyword, as it is further configuring the existing “hello” task. You may only construct a task once in a project, but you may add any number of code blocks providing additional configuration.

Adding specific behavior

You can add specific behavior on top of the common behavior. Usually we put the project specific behavior in the build script of the project where we want to apply this specific behavior. But as we have already seen, we don’t have to do it this way. We could add project specific behavior for the bluewhale project like this:

Example 183. Defining specific behaviour for particular project

build.gradle

allprojects {
    task hello {
        doLast { task ->
            println "I'm $task.project.name"
        }
    }
}
subprojects {
    hello {
        doLast {
            println "- I depend on water"
        }
    }
}
project(':bluewhale').hello {
    doLast {
        println "- I'm the largest animal that has ever lived on this planet."
    }
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale
- I depend on water
- I'm the largest animal that has ever lived on this planet.
I'm krill
- I depend on water

As we have said, we usually prefer to put project specific behavior into the build script of this project. Let’s refactor and also add some project specific behavior to the krill project.

Example 184. Defining specific behaviour for project krill

Build layout

water/
  build.gradle
  settings.gradle
  bluewhale/
    build.gradle
  krill/
    build.gradle

Note: The code for this example can be found at samples/userguide/multiproject/spreadSpecifics/water in the ‘-all’ distribution of Gradle.

settings.gradle

include 'bluewhale', 'krill'

bluewhale/build.gradle

hello.doLast {
  println "- I'm the largest animal that has ever lived on this planet."
}

krill/build.gradle

hello.doLast {
  println "- The weight of my species in summer is twice as heavy as all human beings."
}

build.gradle

allprojects {
    task hello {
        doLast { task ->
            println "I'm $task.project.name"
        }
    }
}
subprojects {
    hello {
        doLast {
            println "- I depend on water"
        }
    }
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale
- I depend on water
- I'm the largest animal that has ever lived on this planet.
I'm krill
- I depend on water
- The weight of my species in summer is twice as heavy as all human beings.

Project filtering

To show more of the power of configuration injection, let’s add another project called tropicalFish and add more behavior to the build via the build script of the water project.

Filtering by name

Example 185. Adding custom behaviour to some projects (filtered by project name)

Build layout

water/
  build.gradle
  settings.gradle
  bluewhale/
    build.gradle
  krill/
    build.gradle
  tropicalFish/

Note: The code for this example can be found at samples/userguide/multiproject/addTropical/water in the ‘-all’ distribution of Gradle.

settings.gradle

include 'bluewhale', 'krill', 'tropicalFish'

build.gradle

allprojects {
    task hello {
        doLast { task ->
            println "I'm $task.project.name"
        }
    }
}
subprojects {
    hello {
        doLast {
            println "- I depend on water"
        }
    }
}
configure(subprojects.findAll {it.name != 'tropicalFish'}) {
    hello {
        doLast {
            println '- I love to spend time in the arctic waters.'
        }
    }
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale
- I depend on water
- I love to spend time in the arctic waters.
- I'm the largest animal that has ever lived on this planet.
I'm krill
- I depend on water
- I love to spend time in the arctic waters.
- The weight of my species in summer is twice as heavy as all human beings.
I'm tropicalFish
- I depend on water

The configure() method takes a list as an argument and applies the configuration to the projects in this list.

Filtering by properties

Using the project name for filtering is one option. Using extra project properties is another. (See the section called “Extra properties” for more information on extra properties.)

Example 186. Adding custom behaviour to some projects (filtered by project properties)

Build layout

water/
  build.gradle
  settings.gradle
  bluewhale/
    build.gradle
  krill/
    build.gradle
  tropicalFish/
    build.gradle

Note: The code for this example can be found at samples/userguide/multiproject/tropicalWithProperties/water in the ‘-all’ distribution of Gradle.

settings.gradle

include 'bluewhale', 'krill', 'tropicalFish'

bluewhale/build.gradle

ext.arctic = true
hello.doLast {
  println "- I'm the largest animal that has ever lived on this planet."
}

krill/build.gradle

ext.arctic = true
hello.doLast {
    println "- The weight of my species in summer is twice as heavy as all human beings."
}

tropicalFish/build.gradle

ext.arctic = false

build.gradle

allprojects {
    task hello {
        doLast { task ->
            println "I'm $task.project.name"
        }
    }
}
subprojects {
    hello {
        doLast {println "- I depend on water"}
        afterEvaluate { Project project ->
            if (project.arctic) { doLast {
                println '- I love to spend time in the arctic waters.' }
            }
        }
    }
}

Output of gradle -q hello

> gradle -q hello
I'm water
I'm bluewhale
- I depend on water
- I'm the largest animal that has ever lived on this planet.
- I love to spend time in the arctic waters.
I'm krill
- I depend on water
- The weight of my species in summer is twice as heavy as all human beings.
- I love to spend time in the arctic waters.
I'm tropicalFish
- I depend on water

In the build file of the water project we use an afterEvaluate notification. This means that the closure we are passing gets evaluated after the build scripts of the subproject are evaluated. As the property arctic is set in those build scripts, we have to do it this way. You will find more on this topic in the section called “Dependencies - Which dependencies?”

Execution rules for multi-project builds

When we executed the hello task from the root project dir, things behaved in an intuitive way. All the hello tasks of the different projects were executed. Let’s switch to the bluewhale dir and see what happens if we execute Gradle from there.

Example 187. Running build from subproject

Output of gradle -q hello

> gradle -q hello
I'm bluewhale
- I depend on water
- I'm the largest animal that has ever lived on this planet.
- I love to spend time in the arctic waters.

The basic rule behind Gradle’s behavior is simple. Gradle looks down the hierarchy, starting with the current dir, for tasks with the name hello and executes them. One thing is very important to note. Gradle always evaluates every project of the multi-project build and creates all existing task objects. Then, according to the task name arguments and the current dir, Gradle filters the tasks which should be executed. Because of Gradle’s cross project configuration every project has to be evaluated before any task gets executed. We will have a closer look at this in the next section. Let’s now have our last marine example. Let’s add a task to bluewhale and krill.

Example 188. Evaluation and execution of projects

bluewhale/build.gradle

ext.arctic = true
hello {
    doLast {
        println "- I'm the largest animal that has ever lived on this planet."
    }
}

task distanceToIceberg {
    doLast {
        println '20 nautical miles'
    }
}

krill/build.gradle

ext.arctic = true
hello {
    doLast {
        println "- The weight of my species in summer is twice as heavy as all human beings."
    }
}

task distanceToIceberg {
    doLast {
        println '5 nautical miles'
    }
}

Output of gradle -q distanceToIceberg

> gradle -q distanceToIceberg
20 nautical miles
5 nautical miles

Here’s the output without the -q option:

Example 189. Evaluation and execution of projects

Output of gradle distanceToIceberg

> gradle distanceToIceberg
:bluewhale:distanceToIceberg
20 nautical miles
:krill:distanceToIceberg
5 nautical miles

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

The build is executed from the water project. Neither water nor tropicalFish have a task with the name distanceToIceberg. Gradle does not care. The simple rule mentioned above is: execute all tasks down the hierarchy which have this name. Only complain if there is no such task!

Running tasks by their absolute path

As we have seen, you can run a multi-project build by entering any subproject dir and executing the build from there. All matching task names of the project hierarchy starting with the current dir are executed. But Gradle also lets you execute tasks by their absolute path (see also the section called “Project and task paths”):

Example 190. Running tasks by their absolute path

Output of gradle -q :hello :krill:hello hello

> gradle -q :hello :krill:hello hello
I'm water
I'm krill
- I depend on water
- The weight of my species in summer is twice as heavy as all human beings.
- I love to spend time in the arctic waters.
I'm tropicalFish
- I depend on water

The build is executed from the tropicalFish project. We execute the hello tasks of the water, the krill and the tropicalFish project. The first two tasks are specified by their absolute path, the last task is executed using the name matching mechanism described above.

Project and task paths

A project path has the following pattern: It starts with an optional colon, which denotes the root project. The root project is the only project in a path that is not specified by its name. The rest of a project path is a colon-separated sequence of project names, where the next project is a subproject of the previous project.

The path of a task is simply its project path plus the task name, like “:bluewhale:hello”. Within a project you can address a task of the same project just by its name. This is interpreted as a relative path.

Dependencies - Which dependencies?

The examples from the last section were special, as the projects had no execution dependencies. They had only configuration dependencies. The following sections illustrate the differences between these two types of dependencies.

Execution dependencies

Dependencies and execution order

Example 191. Dependencies and execution order

Build layout

messages/
  build.gradle
  settings.gradle
  consumer/
    build.gradle
  producer/
    build.gradle

Note: The code for this example can be found at samples/userguide/multiproject/dependencies/firstMessages/messages in the ‘-all’ distribution of Gradle.

build.gradle

ext.producerMessage = null

settings.gradle

include 'consumer', 'producer'

consumer/build.gradle

task action {
    doLast {
        println("Consuming message: ${rootProject.producerMessage}")
    }
}

producer/build.gradle

task action {
    doLast {
        println "Producing message:"
        rootProject.producerMessage = 'Watch the order of execution.'
    }
}

Output of gradle -q action

> gradle -q action
Consuming message: null
Producing message:

This didn’t quite do what we want. If nothing else is defined, Gradle executes tasks in alphanumeric order. Therefore, Gradle will execute “:consumer:action” before “:producer:action”. Let’s try to solve this with a hack and rename the producer project to “aProducer”.

Example 192. Dependencies and execution order

Build layout

messages/
  build.gradle
  settings.gradle
  aProducer/
    build.gradle
  consumer/
    build.gradle

build.gradle

ext.producerMessage = null

settings.gradle

include 'consumer', 'aProducer'

aProducer/build.gradle

task action {
    doLast {
        println "Producing message:"
        rootProject.producerMessage = 'Watch the order of execution.'
    }
}

consumer/build.gradle

task action {
    doLast {
        println("Consuming message: ${rootProject.producerMessage}")
    }
}

Output of gradle -q action

> gradle -q action
Producing message:
Consuming message: Watch the order of execution.

We can show where this hack doesn’t work if we now switch to the consumer dir and execute the build.

Example 193. Dependencies and execution order

Output of gradle -q action

> gradle -q action
Consuming message: null

The problem is that the two “action” tasks are unrelated. If you execute the build from the “messages” project Gradle executes them both because they have the same name and they are down the hierarchy. In the last example only one “action” task was down the hierarchy and therefore it was the only task that was executed. We need something better than this hack.

Declaring dependencies

Example 194. Declaring dependencies

Build layout

messages/
  build.gradle
  settings.gradle
  consumer/
    build.gradle
  producer/
    build.gradle

Note: The code for this example can be found at samples/userguide/multiproject/dependencies/messagesWithDependencies/messages in the ‘-all’ distribution of Gradle.

build.gradle

ext.producerMessage = null

settings.gradle

include 'consumer', 'producer'

consumer/build.gradle

task action(dependsOn: ":producer:action") {
    doLast {
        println("Consuming message: ${rootProject.producerMessage}")
    }
}

producer/build.gradle

task action {
    doLast {
        println "Producing message:"
        rootProject.producerMessage = 'Watch the order of execution.'
    }
}

Output of gradle -q action

> gradle -q action
Producing message:
Consuming message: Watch the order of execution.

Running this from the consumer directory gives:

Example 195. Declaring dependencies

Output of gradle -q action

> gradle -q action
Producing message:
Consuming message: Watch the order of execution.

This is now working better because we have declared that the “action” task in the “consumer” project has an execution dependency on the “action” task in the “producer” project.

The nature of cross project task dependencies

Of course, task dependencies across different projects are not limited to tasks with the same name. Let’s change the naming of our tasks and execute the build.

Example 196. Cross project task dependencies

consumer/build.gradle

task consume(dependsOn: ':producer:produce') {
    doLast {
        println("Consuming message: ${rootProject.producerMessage}")
    }
}

producer/build.gradle

task produce {
    doLast {
        println "Producing message:"
        rootProject.producerMessage = 'Watch the order of execution.'
    }
}

Output of gradle -q consume

> gradle -q consume
Producing message:
Consuming message: Watch the order of execution.

Configuration time dependencies

Let’s see one more example with our producer-consumer build before we enter Java land. We add a property to the “producer” project and create a configuration time dependency from “consumer” to “producer”.

Example 197. Configuration time dependencies

consumer/build.gradle

def message = rootProject.producerMessage

task consume {
    doLast {
        println("Consuming message: " + message)
    }
}

producer/build.gradle

rootProject.producerMessage = 'Watch the order of evaluation.'

Output of gradle -q consume

> gradle -q consume
Consuming message: null

The default evaluation order of projects is alphanumeric (for the same nesting level). Therefore the “consumer” project is evaluated before the “producer” project and the “producerMessage” value is set after it is read by the “consumer” project. Gradle offers a solution for this.

Example 198. Configuration time dependencies - evaluationDependsOn

consumer/build.gradle

evaluationDependsOn(':producer')

def message = rootProject.producerMessage

task consume {
    doLast {
        println("Consuming message: " + message)
    }
}

Output of gradle -q consume

> gradle -q consume
Consuming message: Watch the order of evaluation.

The use of the “evaluationDependsOn” method results in the “producer” project being evaluated before the “consumer” project. This example is a bit contrived in order to show the mechanism; in this case there is an easier solution, namely reading the producerMessage property at execution time.

Example 199. Configuration time dependencies

consumer/build.gradle

task consume {
    doLast {
        println("Consuming message: ${rootProject.producerMessage}")
    }
}

Output of gradle -q consume

> gradle -q consume
Consuming message: Watch the order of evaluation.

Configuration dependencies are very different from execution dependencies. Configuration dependencies are between projects whereas execution dependencies are always resolved to task dependencies. Also note that all projects are always configured, even when you start the build from a subproject. The default configuration order is top down, which is usually what is needed.

To change the default configuration order to “bottom up”, use the “evaluationDependsOnChildren()” method instead.
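A minimal sketch of bottom-up configuration, assuming the producer-consumer layout from above:

build.gradle

ext.producerMessage = null

// Evaluate all child projects before the rest of this build script runs.
evaluationDependsOnChildren()

// The children have been evaluated at this point, so a value assigned
// by a subproject's build script is already visible here.
println "Message after child evaluation: ${producerMessage}"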

On the same nesting level the configuration order depends on the alphanumeric position. The most common use case is to have multi-project builds that share a common lifecycle (e.g. all projects use the Java plugin). If you use dependsOn to declare an execution dependency between tasks of different projects, the default behavior is to also create a configuration dependency between the two projects. Therefore it is likely that you don’t have to define configuration dependencies explicitly.

Real life examples

Gradle’s multi-project features are driven by real life use cases. One good example consists of two web application projects and a parent project that creates a distribution including the two web applications.[9] For the example we use only one build script and do cross project configuration.

Example 200. Dependencies - real life example - crossproject configuration

Build layout

webDist/
  settings.gradle
  build.gradle
  date/
    src/main/java/
      org/gradle/sample/
        DateServlet.java
  hello/
    src/main/java/
      org/gradle/sample/
        HelloServlet.java

Note: The code for this example can be found at samples/userguide/multiproject/dependencies/webDist in the ‘-all’ distribution of Gradle.

settings.gradle

include 'date', 'hello'

build.gradle

allprojects {
    apply plugin: 'java'
    group = 'org.gradle.sample'
    version = '1.0'
}

subprojects {
    apply plugin: 'war'
    repositories {
        mavenCentral()
    }
    dependencies {
        compile "javax.servlet:servlet-api:2.5"
    }
}

task explodedDist(type: Copy) {
    into "$buildDir/explodedDist"
    subprojects {
        from tasks.withType(War)
    }
}

We have an interesting set of dependencies. Obviously the date and hello projects have a configuration dependency on webDist, as all the build logic for the webapp projects is injected by webDist. The execution dependency is in the other direction, as webDist depends on the build artifacts of date and hello. There is even a third dependency. webDist has a configuration dependency on date and hello because it needs to know the archivePath. But it asks for this information at execution time. Therefore we have no circular dependency.

Such dependency patterns are daily bread in the problem space of multi-project builds. If a build system does not support these patterns, you either can’t solve your problem or you need to do ugly hacks which are hard to maintain and massively impair your productivity as a build master.

Project lib dependencies

What if one project needs the jar produced by another project in its compile path, and not just the jar but also the transitive dependencies of this jar? Obviously this is a very common use case for Java multi-project builds. As already mentioned in the section called “Project dependencies”, Gradle offers project lib dependencies for this.

Example 201. Project lib dependencies

Build layout

java/
  settings.gradle
  build.gradle
  api/
    src/main/java/
      org/gradle/sample/
        api/
          Person.java
        apiImpl/
          PersonImpl.java
  services/personService/
    src/
      main/java/
        org/gradle/sample/services/
          PersonService.java
      test/java/
        org/gradle/sample/services/
          PersonServiceTest.java
  shared/
    src/main/java/
      org/gradle/sample/shared/
        Helper.java

Note: The code for this example can be found at samples/userguide/multiproject/dependencies/java in the ‘-all’ distribution of Gradle.


We have the projects “shared”, “api” and “personService”. The “personService” project has a lib dependency on the other two projects. The “api” project has a lib dependency on the “shared” project. “services” is also a project, but we use it just as a container. It has no build script and gets nothing injected by another build script. We use the : separator to define a project path. Consult the DSL documentation of Settings.include(java.lang.String[]) for more information about defining project paths.

Example 202. Project lib dependencies

settings.gradle

include 'api', 'shared', 'services:personService'

build.gradle

subprojects {
    apply plugin: 'java'
    group = 'org.gradle.sample'
    version = '1.0'
    repositories {
        mavenCentral()
    }
    dependencies {
        testCompile "junit:junit:4.12"
    }
}

project(':api') {
    dependencies {
        compile project(':shared')
    }
}

project(':services:personService') {
    dependencies {
        compile project(':shared'), project(':api')
    }
}


All the build logic is in the “build.gradle” file of the root project.[10] A “lib” dependency is a special form of an execution dependency. It causes the other project to be built first and adds the jar with the classes of the other project to the classpath. It also adds the dependencies of the other project to the classpath. So you can enter the “api” directory and trigger gradle compileJava: first the “shared” project is built and then the “api” project. Project dependencies enable partial multi-project builds.

If you come from Maven land you might be perfectly happy with this. If you come from Ivy land, you might expect some more fine grained control. Gradle offers this to you:

Example 203. Fine grained control over dependencies

build.gradle

subprojects {
    apply plugin: 'java'
    group = 'org.gradle.sample'
    version = '1.0'
}

project(':api') {
    configurations {
        spi
    }
    dependencies {
        compile project(':shared')
    }
    task spiJar(type: Jar) {
        baseName = 'api-spi'
        from sourceSets.main.output
        include('org/gradle/sample/api/**')
    }
    artifacts {
        spi spiJar
    }
}

project(':services:personService') {
    dependencies {
        compile project(':shared')
        compile project(path: ':api', configuration: 'spi')
        testCompile "junit:junit:4.12", project(':api')
    }
}

By default, the Java plugin adds a jar containing all the classes to your project libraries. In this example we create an additional library containing only the interfaces of the “api” project. We assign this library to a new dependency configuration. For the person service we declare that the project should be compiled only against the “api” interfaces but tested with all classes from “api”.

Disabling the build of dependency projects

Sometimes you don’t want the depended-on projects to be built when doing a partial build. To disable building the depended-on projects, run Gradle with the -a option.

Parallel project execution

With more and more CPU cores available on developer desktops and CI servers, it is important that Gradle is able to fully utilise these processing resources. More specifically, parallel execution attempts to:

  • Reduce total build time for a multi-project build where execution is IO bound or otherwise does not consume all available CPU resources.

  • Provide faster feedback for execution of small projects without awaiting completion of other projects.

Although Gradle already offers parallel test execution via Test.setMaxParallelForks(int), the feature described in this section is parallel execution at a project level. Parallel execution is an incubating feature. Please use it and let us know how it works for you.
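For reference, parallel test execution within a single project looks like this (the fork count is illustrative):

build.gradle

apply plugin: 'java'

test {
    // Run the tests in up to four separate JVM processes.
    maxParallelForks = 4
}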

Parallel project execution allows the separate projects in a decoupled multi-project build to be executed in parallel (see also: the section called “Decoupled Projects”). While parallel execution does not strictly require decoupling at configuration time, the long-term goal is to provide a powerful set of features that will be available for fully decoupled projects. Such features include:

  • Configuration of projects in parallel.

  • Re-use of configuration for unchanged projects.

  • Project-level up-to-date checks.

  • Using pre-built artifacts in the place of building dependent projects.

How does parallel execution work? First, you need to tell Gradle to use parallel mode. You can use the --parallel command line argument or configure your build environment (the section called “Gradle properties”). Unless you provide a specific number of parallel threads, Gradle attempts to choose the right number based on available CPU cores. Every parallel worker exclusively owns a given project while executing a task. Task dependencies are fully supported and parallel workers will start executing upstream tasks first. Bear in mind that the alphabetical ordering of decoupled tasks, as can be seen during sequential execution, is not guaranteed in parallel mode. In other words, in parallel mode tasks will run as soon as their dependencies complete and a task worker is available to run them, which may be earlier than they would start during a sequential build. You should make sure that task dependencies and task inputs/outputs are declared correctly to avoid ordering issues.
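A minimal sketch of the two ways to enable parallel mode described above. On the command line:

> gradle build --parallel

Or persistently, in the project’s gradle.properties file:

org.gradle.parallel=true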

Decoupled Projects

Gradle allows any project to access any other project during both the configuration and execution phases. While this provides a great deal of power and flexibility to the build author, it also limits the flexibility that Gradle has when building those projects. For instance, this effectively prevents Gradle from correctly building multiple projects in parallel, configuring only a subset of projects, or from substituting a pre-built artifact in place of a project dependency.

Two projects are said to be decoupled if they do not directly access each other’s project model. Decoupled projects may only interact in terms of declared dependencies: project dependencies (the section called “Project dependencies”) and/or task dependencies (the section called “Task dependencies”). Any other form of project interaction (i.e. modifying another project object or reading a value from another project object) causes the projects to be coupled. The consequence of coupling during the configuration phase is that if Gradle is invoked with the ‘configuration on demand’ option, the result of the build can be flawed in several ways. The consequence of coupling during the execution phase is that if Gradle is invoked with the parallel option, one project’s task may run too late to influence a task of a project building in parallel. Gradle does not attempt to detect coupling and warn the user, as there are too many possibilities to introduce coupling.

A very common way for projects to become coupled is by using configuration injection (the section called “Cross project configuration”). It may not be immediately apparent, but using key Gradle features like the allprojects and subprojects keywords automatically causes your projects to be coupled. This is because these keywords are used in a build.gradle file, which defines a project. Often this is a “root project” that does nothing more than define common configuration, but as far as Gradle is concerned this root project is still a fully-fledged project, and by using allprojects that project is effectively coupled to all other projects. Coupling of the root project to subprojects does not impact ‘configuration on demand’, but using allprojects or subprojects in any subproject’s build.gradle file will have an impact.

This means that using any form of shared build script logic or configuration injection (allprojects, subprojects, etc.) will cause your projects to be coupled. As we extend the concept of project decoupling and provide features that take advantage of decoupled projects, we will also introduce new features to help you to solve common use cases (like configuration injection) without causing your projects to be coupled.

In order to make good use of cross project configuration without running into issues for parallel and 'configuration on demand' options, follow these recommendations:

  • Avoid a subproject’s build.gradle referencing other subprojects; prefer cross-project configuration from the root project (see the sketch after this list).

  • Avoid changing the configuration of other projects at execution time.
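The following minimal sketch illustrates the first recommendation, assuming subprojects named “api” and “shared”: the root build script configures the subprojects, and no subproject’s build.gradle reaches into a sibling’s project model.

build.gradle

// Cross project configuration from the root project keeps the
// subprojects decoupled from one another.
project(':api') {
    apply plugin: 'java'
}

project(':shared') {
    apply plugin: 'java'
}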

Multi-Project Building and Testing

The build task of the Java plugin is typically used to compile, test, and perform code style checks (if the CodeQuality plugin is used) of a single project. In multi-project builds you may often want to do all of these tasks across a range of projects. The buildNeeded and buildDependents tasks can help with this.

Look at Example 202. In this example, the “:services:personService” project depends on both the “:api” and “:shared” projects. The “:api” project also depends on the “:shared” project.

Assume you are working on a single project, the “:api” project. You have been making changes, but have not built the entire project since performing a clean. You want to build any necessary supporting jars, but only perform code quality and unit tests on the project you have changed. The build task does this.

Example 204. Build and Test Single Project

Output of gradle :api:build

> gradle :api:build
:shared:compileJava
:shared:processResources
:shared:classes
:shared:jar
:api:compileJava
:api:processResources
:api:classes
:api:jar
:api:assemble
:api:compileTestJava
:api:processTestResources
:api:testClasses
:api:test
:api:check
:api:build

BUILD SUCCESSFUL in 0s
9 actionable tasks: 9 executed

While you are working in a typical development cycle repeatedly building and testing changes to the “:api” project (knowing that you are only changing files in this one project), you may not even want to suffer the expense of rebuilding the “:shared” project to see whether anything there has changed. Adding the “-a” option will cause Gradle to use cached jars to resolve any project lib dependencies and not try to re-build the depended-on projects.

Example 205. Partial Build and Test Single Project

Output of gradle -a :api:build

> gradle -a :api:build
:api:compileJava
:api:processResources
:api:classes
:api:jar
:api:assemble
:api:compileTestJava
:api:processTestResources
:api:testClasses
:api:test
:api:check
:api:build

BUILD SUCCESSFUL in 0s
6 actionable tasks: 6 executed

If you have just gotten the latest version of source from your version control system which included changes in other projects that “:api” depends on, you might want to not only build all the projects you depend on, but test them as well. The buildNeeded task also tests all the projects from the project lib dependencies of the testRuntime configuration.

Example 206. Build and Test Depended On Projects

Output of gradle :api:buildNeeded

> gradle :api:buildNeeded
:shared:compileJava
:shared:processResources
:shared:classes
:shared:jar
:api:compileJava
:api:processResources
:api:classes
:api:jar
:api:assemble
:api:compileTestJava
:api:processTestResources
:api:testClasses
:api:test
:api:check
:api:build
:shared:assemble
:shared:compileTestJava
:shared:processTestResources
:shared:testClasses
:shared:test
:shared:check
:shared:build
:shared:buildNeeded
:api:buildNeeded

BUILD SUCCESSFUL in 0s
12 actionable tasks: 12 executed

You also might want to refactor some part of the “:api” project that is used in other projects. If you make these types of changes, it is not sufficient to test just the “:api” project, you also need to test all projects that depend on the “:api” project. The buildDependents task also tests all the projects that have a project lib dependency (in the testRuntime configuration) on the specified project.

Example 207. Build and Test Dependent Projects

Output of gradle :api:buildDependents

> gradle :api:buildDependents
:shared:compileJava
:shared:processResources
:shared:classes
:shared:jar
:api:compileJava
:api:processResources
:api:classes
:api:jar
:api:assemble
:api:compileTestJava
:api:processTestResources
:api:testClasses
:api:test
:api:check
:api:build
:services:personService:compileJava
:services:personService:processResources
:services:personService:classes
:services:personService:jar
:services:personService:assemble
:services:personService:compileTestJava
:services:personService:processTestResources
:services:personService:testClasses
:services:personService:test
:services:personService:check
:services:personService:build
:services:personService:buildDependents
:api:buildDependents

BUILD SUCCESSFUL in 0s
17 actionable tasks: 17 executed

Finally, you may want to build and test everything in all projects. Any task you run in the root project folder will cause that same-named task to be run on all of the children. So you can just run “gradle build” to build and test all projects.

Multi Project and buildSrc

The section called “Build sources in the buildSrc project” tells us that we can place build logic to be compiled and tested in the special buildSrc directory. In a multi-project build, there can only be one buildSrc directory, which must be located in the root directory.
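A minimal sketch of such a layout, reusing the messages build from above (the class name under buildSrc is hypothetical):

messages/
  buildSrc/
    src/main/groovy/
      MyCustomTask.groovy
  build.gradle
  settings.gradle
  consumer/
    build.gradle
  producer/
    build.gradle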

Property and method inheritance

Properties and methods declared in a project are inherited by all of its subprojects. This is an alternative to configuration injection. But we think that the model of inheritance does not reflect the problem space of multi-project builds very well. In a future edition of this user guide we might write more about this.

Method inheritance might be interesting to use as Gradle’s Configuration Injection does not support methods yet (but will in a future release).

You might be wondering why we have implemented a feature we obviously don’t like that much. One reason is that it is offered by other tools and we want to have the check mark in a feature comparison :). And we like to offer our users a choice.

Summary

Writing this chapter was pretty exhausting and reading it might have a similar effect. Our final message for this chapter is that multi-project builds with Gradle are usually not difficult. There are five elements you need to remember: allprojects, subprojects, evaluationDependsOn, evaluationDependsOnChildren and project lib dependencies.[11] With those elements, and keeping in mind that Gradle has a distinct configuration and execution phase, you already have a lot of flexibility. But when you enter steep territory Gradle does not become an obstacle and usually accompanies and carries you to the top of the mountain.



[9] The real use case we had was using http://lucene.apache.org/solr, where you need a separate war for each index you are accessing. That was one reason why we created a distribution of webapps. The Resin servlet container allows us to let such a distribution point to a base installation of the servlet container.

[10] We do this here, as it makes the layout a bit easier. We usually put the project specific stuff into the build script of the respective projects.

[11] So we are well in the range of the 7 plus 2 Rule :)

Using Gradle Plugins

Gradle at its core intentionally provides very little for real world automation. All of the useful features, like the ability to compile Java code, are added by plugins. Plugins add new tasks (e.g. JavaCompile), domain objects (e.g. SourceSet), conventions (e.g. Java source is located at src/main/java) as well as extending core objects and objects from other plugins.

In this chapter we discuss how to use plugins and the terminology and concepts surrounding plugins.

What plugins do

Applying a plugin to a project allows the plugin to extend the project’s capabilities. It can do things such as:

  • Extend the Gradle model (e.g. add new DSL elements that can be configured)

  • Configure the project according to conventions (e.g. add new tasks or configure sensible defaults)

  • Apply specific configuration (e.g. add organizational repositories or enforce standards)

By applying plugins, rather than adding logic to the project build script, we can reap a number of benefits. Applying plugins:

  • Promotes reuse and reduces the overhead of maintaining similar logic across multiple projects

  • Allows a higher degree of modularization, enhancing comprehensibility and organization

  • Encapsulates imperative logic and allows build scripts to be as declarative as possible

Types of plugins

There are two general types of plugins in Gradle, script plugins and binary plugins. Script plugins are additional build scripts that further configure the build and usually implement a declarative approach to manipulating the build. They are typically used within a build although they can be externalized and accessed from a remote location. Binary plugins are classes that implement the Plugin interface and adopt a programmatic approach to manipulating the build. Binary plugins can reside within a build script, within the project hierarchy or externally in a plugin jar.

A plugin often starts out as a script plugin (because they are easy to write) and then, as the code becomes more valuable, it’s migrated to a binary plugin that can be easily tested and shared between multiple projects or organizations.

Using plugins

To use the build logic encapsulated in a plugin, Gradle needs to perform two steps. First, it needs to resolve the plugin, and then it needs to apply the plugin to the target, usually a Project.

Resolving a plugin means finding the correct version of the jar which contains a given plugin and adding it to the script classpath. Once a plugin is resolved, its API can be used in a build script. Script plugins are self-resolving in that they are resolved from the specific file path or URL provided when applying them. Core binary plugins provided as part of the Gradle distribution are automatically resolved.

Applying a plugin means actually executing the plugin’s Plugin.apply(T) on the Project you want to enhance with the plugin. Applying plugins is idempotent. That is, you can safely apply any plugin multiple times without side effects.
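For example, the following is safe, as the second application is a no-op:

build.gradle

apply plugin: 'java'
apply plugin: 'java' // already applied, so this has no effect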

The most common use case for using a plugin is to both resolve the plugin and apply it to the current project. Since this is such a common use case, it’s recommended that build authors use the plugins DSL to both resolve and apply plugins in one step. The feature is technically still incubating, but it works well, and should be used by most users.

Script plugins

Example 208. Applying a script plugin

build.gradle

apply from: 'other.gradle'

Script plugins are automatically resolved and can be applied from a script on the local filesystem or at a remote location. Filesystem locations are relative to the project directory, while remote script locations are specified with an HTTP URL. Multiple script plugins (of either form) can be applied to a given target.
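A remote script plugin is applied the same way; the URL below is purely illustrative:

build.gradle

apply from: 'https://example.com/shared/other.gradle'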

Binary plugins

You apply plugins by their plugin id, which is a globally unique identifier, or name, for plugins. Core Gradle plugins are special in that they provide short names, such as 'java' for the core JavaPlugin. All other binary plugins must use the fully qualified form of the plugin id (e.g. com.github.foo.bar), although some legacy plugins may still utilize a short, unqualified form. Where you put the plugin id depends on whether you are using the plugins DSL or the buildscript block.

Locations of binary plugins

A plugin is simply any class that implements the Plugin interface. Gradle provides the core plugins (e.g. JavaPlugin) as part of its distribution, which means they are automatically resolved. However, non-core binary plugins need to be resolved before they can be applied. This can be achieved in a number of ways, most commonly with the plugins DSL or the buildscript block, both described below.

For more on defining your own plugins, see Writing Custom Plugins.

Applying plugins with the plugins DSL

The plugins DSL is currently incubating. Please be aware that the DSL and other configuration may change in later Gradle versions.

The new plugins DSL provides a succinct and convenient way to declare plugin dependencies. It works with the Gradle plugin portal to provide easy access to both core and community plugins. The plugins DSL block configures an instance of PluginDependenciesSpec.

To apply a core plugin, the short name can be used:

Example 209. Applying a core plugin

build.gradle

plugins {
    id 'java'
}

To apply a community plugin from the portal, the fully qualified plugin id must be used:

Example 210. Applying a community plugin

build.gradle

plugins {
    id 'com.jfrog.bintray' version '0.4.1'
}

See PluginDependenciesSpec for more information on using the Plugin DSL.

Limitations of the plugins DSL

This way of adding plugins to a project is much more than a convenient new syntax. The plugins DSL is processed in a way which allows Gradle to determine the plugins in use very early and very quickly. This allows Gradle to do smart things such as:

  • Optimize the loading and reuse of plugin classes.

  • Allow different plugins to use different versions of dependencies.

  • Provide editors detailed information about the potential properties and values in the buildscript for editing assistance.

This requires that plugins be specified in a way that Gradle can easily and quickly extract, before executing the rest of the build script. It also requires that the definition of plugins to use be somewhat static.

There are some key differences between the new plugin mechanism and the “traditional” apply() method mechanism. There are also some constraints, some of which are temporary limitations while the mechanism is still being developed and some are inherent to the new approach.

Constrained Syntax

The new plugins {} block does not support arbitrary Groovy code. It is constrained, in order to be idempotent (produce the same result every time) and side effect free (safe for Gradle to execute at any time).

The form is:

plugins {
    id «plugin id» version «plugin version» [apply «false»]
}

Where «plugin id» and «plugin version» must be constant, literal strings, and the apply statement with a boolean can be used to disable the default behavior of applying the plugin immediately (e.g. if you want to apply it only in subprojects). No other statements are allowed; their presence will cause a compilation error.

The plugins {} block must also be a top level statement in the buildscript. It cannot be nested inside another construct (e.g. an if-statement or for-loop).

Can only be used in build scripts

The plugins {} block can currently only be used in a project’s build script. It cannot be used in script plugins, the settings.gradle file or init scripts.

Future versions of Gradle will remove this restriction.

If the restrictions of the new syntax are prohibitive, the recommended approach is to apply plugins using the buildscript {} block.

Applying plugins to subprojects

If you have a multi-project build, you probably want to apply plugins to some or all of the subprojects in your build, but not to the root or master project. The default behavior of the plugins {} block is to immediately resolve and apply the plugins. But, you can use the apply false syntax to tell Gradle not to apply the plugin to the current project and then use apply plugin: «plugin id» in the subprojects block:

Example 211. Applying plugins only on certain subprojects.

settings.gradle

include 'helloA'
include 'helloB'
include 'goodbyeC'

build.gradle

plugins {
  id "org.gradle.sample.hello" version "1.0.0" apply false
  id "org.gradle.sample.goodbye" version "1.0.0" apply false
}

subprojects { subproject ->
    if (subproject.name.startsWith("hello")) {
        apply plugin: 'org.gradle.sample.hello'
    }
    if (subproject.name.startsWith("goodbye")) {
        apply plugin: 'org.gradle.sample.goodbye'
    }
}

If you then run gradle hello you’ll see that only the helloA and helloB subprojects had the hello plugin applied.

gradle/subprojects/docs/src/samples/plugins/multiproject $> gradle hello
Parallel execution is an incubating feature.
:helloA:hello
:helloB:hello
Hello!
Hello!

BUILD SUCCEEDED

Plugin Management

The pluginManagement {} DSL is currently incubating. Please be aware that the DSL and other configuration may change in later Gradle versions.

Custom Plugin Repositories

By default, the plugins {} DSL resolves plugins from the public Gradle Plugin Portal. Many build authors would also like to resolve plugins from private Maven or Ivy repositories because the plugins contain proprietary implementation details, or just to have more control over what plugins are available to their builds.

To specify custom plugin repositories, use the repositories {} block inside pluginManagement {} in the settings.gradle file:

Example 212. Using plugins from custom plugin repositories.

settings.gradle

pluginManagement {
  repositories {
      maven {
        url 'maven-repo'
      }
      gradlePluginPortal()
      ivy {
        url 'ivy-repo'
      }
  }
}

This tells Gradle to first look in the Maven repository at maven-repo when resolving plugins and then to check the Gradle Plugin Portal if the plugins are not found in the Maven repository. If you don’t want the Gradle Plugin Portal to be searched, omit the gradlePluginPortal() line. Finally, the Ivy repository at ivy-repo will be checked.

Plugin Resolution Rules

Plugin resolution rules allow you to modify plugin requests made in plugins {} blocks, e.g. changing the requested version or explicitly specifying the implementation artifact coordinates.

To add resolution rules, use the resolutionStrategy {} inside the pluginManagement {} block:

Example 213. Plugin resolution strategy.

settings.gradle

pluginManagement {
  resolutionStrategy {
      eachPlugin {
          if (requested.id.namespace == 'org.gradle.sample') {
              useModule('org.gradle.sample:sample-plugins:1.0.0')
          }
      }
  }
  repositories {
      maven {
        url 'maven-repo'
      }
      gradlePluginPortal()
      ivy {
        url 'ivy-repo'
      }
  }
}

This tells Gradle to use the specified plugin implementation artifact instead of using its built-in default mapping from plugin ID to Maven/Ivy coordinates.

The pluginManagement {} block may only appear in the settings.gradle file, and must be the first block in the file. Custom Maven and Ivy plugin repositories must contain plugin marker artifacts in addition to the artifacts which actually implement the plugin. For more information on publishing plugins to custom repositories read Gradle Plugin Development Plugin.

See PluginManagementSpec for complete documentation for using the pluginManagement {} block.

Plugin Marker Artifacts

Since the plugins {} DSL block only allows for declaring plugins by their globally unique plugin id and version properties, Gradle needs a way to look up the coordinates of the plugin implementation artifact. To do so, Gradle will look for a Plugin Marker Artifact with the coordinates plugin.id:plugin.id.gradle.plugin:plugin.version. This marker needs to have a dependency on the actual plugin implementation. Publishing these markers is automated by the java-gradle-plugin.
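To illustrate the pattern: the org.gradle.sample.hello plugin at version 1.0.0, as published by the sample below, would be looked up via the marker coordinates:

org.gradle.sample.hello:org.gradle.sample.hello.gradle.plugin:1.0.0

This marker in turn depends on the actual implementation artifact, such as org.gradle.sample:sample-plugins:1.0.0 in the resolution rule example above.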

For example, the following complete sample from the sample-plugins project shows how to publish an org.gradle.sample.hello plugin and an org.gradle.sample.goodbye plugin to both an Ivy and Maven repository using the combination of the java-gradle-plugin, the maven-publish plugin, and the ivy-publish plugin.

Example 214. Complete Plugin Publishing Sample

build.gradle

plugins {
  id 'java-gradle-plugin'
  id 'maven-publish'
  id 'ivy-publish'
}

group 'org.gradle.sample'
version '1.0.0'

gradlePlugin {
  plugins {
    hello {
      id = "org.gradle.sample.hello"
      implementationClass = "org.gradle.sample.hello.HelloPlugin"
    }
    goodbye {
      id = "org.gradle.sample.goodbye"
      implementationClass = "org.gradle.sample.goodbye.GoodbyePlugin"
    }
  }
}

publishing {
  repositories {
    maven {
      url "../consuming/maven-repo"
    }
    ivy {
      url "../consuming/ivy-repo"
    }
  }
}

Running gradle publish in the sample directory causes the following repo layouts to exist:

(Figure: the resulting Maven and Ivy repository layouts, showing a plugin marker artifact alongside the implementation artifact for each plugin.)

Legacy Plugin Application

With the introduction of the plugins DSL, users should have little reason to use the legacy method of applying plugins. It is documented here in case a build author cannot use the plugins DSL due to restrictions in how it currently works.

Applying Binary Plugins

Example 215. Applying a binary plugin

build.gradle

apply plugin: 'java'

Plugins can be applied using a plugin id. In the above case, we are using the short name ‘java’ to apply the JavaPlugin.

Rather than using a plugin id, plugins can also be applied by simply specifying the class of the plugin:

Example 216. Applying a binary plugin by type

build.gradle

apply plugin: JavaPlugin

The JavaPlugin symbol in the above sample refers to the JavaPlugin. This class does not strictly need to be imported as the org.gradle.api.plugins package is automatically imported in all build scripts (see the section called “Default imports”). Furthermore, it is not necessary to append .class to identify a class literal in Groovy as it is in Java.

Applying plugins with the buildscript block

Binary plugins that have been published as external jar files can be added to a project by adding the plugin to the build script classpath and then applying the plugin. External jars can be added to the build script classpath using the buildscript {} block as described in the section called “External dependencies for the build script”.

Example 217. Applying a plugin with the buildscript block

build.gradle

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath "com.jfrog.bintray.gradle:gradle-bintray-plugin:0.4.1"
    }
}

apply plugin: "com.jfrog.bintray"

Finding community plugins

Gradle has a vibrant community of plugin developers who contribute plugins for a wide variety of capabilities. The Gradle plugin portal provides an interface for searching and exploring community plugins.

More on plugins

This chapter aims to serve as an introduction to plugins in Gradle and the role they play. For more information on the inner workings of plugins, see Writing Custom Plugins.

Standard Gradle plugins

There are a number of plugins included in the Gradle distribution. These are listed below.

Language plugins

These plugins add support for various languages which can be compiled for and executed in the JVM.

Table 13. Language plugins

Plugin Id Automatically applies Works with Description

java

java-base

-

Adds Java compilation, testing and bundling capabilities to a project. It serves as the basis for many of the other Gradle plugins. See also Java Quickstart.

groovy

java, groovy-base

-

Adds support for building Groovy projects. See also Groovy Quickstart.

scala

java, scala-base

-

Adds support for building Scala projects.

antlr

java

-

Adds support for generating parsers using ANTLR.

Incubating language plugins

These plugins add support for various languages:

Table 14. Language plugins

Plugin Id Automatically applies Works with Description

assembler

-

-

Adds native assembly language capabilities to a project.

c

-

-

Adds C source compilation capabilities to a project.

cpp

-

-

Adds C++ source compilation capabilities to a project.

objective-c

-

-

Adds Objective-C source compilation capabilities to a project.

objective-cpp

-

-

Adds Objective-C++ source compilation capabilities to a project.

windows-resources

-

-

Adds support for including Windows resources in native binaries.

Integration plugins

These plugins provide some integration with various runtime technologies.

Table 15. Integration plugins

Plugin Id Automatically applies Works with Description

application

java, distribution

-

Adds tasks for running and bundling a Java project as a command-line application.

ear

-

java

Adds support for building J2EE applications.

maven

-

java, war

Adds support for publishing artifacts to Maven repositories.

osgi

java-base

java

Adds support for building OSGi bundles.

war

java

-

Adds support for assembling web application WAR files. See also Web Application Quickstart.

Incubating integration plugins

These plugins provide some integration with various runtime technologies.

Table 16. Incubating integration plugins

Plugin Id Automatically applies Works with Description

distribution

-

-

Adds support for building ZIP and TAR distributions.

java-library-distribution

java, distribution

-

Adds support for building ZIP and TAR distributions for a Java library.

ivy-publish

-

java, war

This plugin provides a new DSL to support publishing artifacts to Ivy repositories, which improves on the existing DSL.

maven-publish

-

java, war

This plugin provides a new DSL to support publishing artifacts to Maven repositories, which improves on the existing DSL.

Software development plugins

These plugins provide help with your software development process.

Table 17. Software development plugins

Plugin Id Automatically applies Works with Description

announce

-

-

Publish messages to your favourite platforms, such as Twitter or Growl.

build-announcements

announce

-

Sends local announcements to your desktop about interesting events in the build lifecycle.

checkstyle

java-base

-

Performs quality checks on your project’s Java source files using Checkstyle and generates reports from these checks.

codenarc

groovy-base

-

Performs quality checks on your project’s Groovy source files using CodeNarc and generates reports from these checks.

eclipse

-

java, groovy, scala

Generates files that are used by Eclipse IDE, thus making it possible to import the project into Eclipse. See also Java Quickstart.

eclipse-wtp

-

ear, war

Does the same as the eclipse plugin, plus generates Eclipse WTP (Web Tools Platform) configuration files. After importing to Eclipse, your war/ear projects should be configured to work with WTP. See also Java Quickstart.

findbugs

java-base

-

Performs quality checks on your project’s Java source files using FindBugs and generates reports from these checks.

idea

-

java

Generates files that are used by the IntelliJ IDEA IDE, thus making it possible to import the project into IDEA.

jdepend

java-base

-

Performs quality checks on your project’s source files using JDepend and generates reports from these checks.

pmd

java-base

-

Performs quality checks on your project’s Java source files using PMD and generates reports from these checks.

project-report

reporting-base

-

Generates reports containing useful information about your Gradle build.

signing

base

-

Adds the ability to digitally sign built files and artifacts.

Incubating software development plugins

These plugins provide help with your software development process.

Table 18. Software development plugins

Plugin Id Automatically applies Works with Description

build-dashboard

reporting-base

-

Generates build dashboard report.

cunit

-

-

Adds support for running CUnit tests.

jacoco

reporting-base

java

Provides integration with the JaCoCo code coverage library for Java.

visual-studio

-

native language plugins

Adds integration with Visual Studio.

java-gradle-plugin

java

-

Assists with development of Gradle plugins by providing standard plugin build configuration and validation.

Base plugins

These plugins form the basic building blocks which the other plugins are assembled from. They are available for you to use in your build files, and are listed here for completeness. However, be aware that they are not yet considered part of Gradle’s public API. As such, these plugins are not documented in the user guide. You might refer to their API documentation to learn more about them.

Table 19. Base plugins

Plugin Id Description

base

Adds the standard lifecycle tasks and configures reasonable defaults for the archive tasks:

  • adds buildConfigurationName tasks. Those tasks assemble the artifacts belonging to the specified configuration.

  • adds uploadConfigurationName tasks. Those tasks assemble and upload the artifacts belonging to the specified configuration.

  • configures reasonable default values for all archive tasks (i.e. tasks that inherit from AbstractArchiveTask, such as Jar, Tar and Zip). Specifically, the destinationDir, baseName and version properties of the archive tasks are preconfigured with defaults. This is extremely useful because it drives consistency across projects: consistent naming conventions of archives and consistent locations after the build has completed.

java-base

Adds the source sets concept to the project. Does not add any particular source sets.

groovy-base

Adds the Groovy source sets concept to the project.

scala-base

Adds the Scala source sets concept to the project.

reporting-base

Adds some shared convention properties to the project, relating to report generation.

Third party plugins

You can find a list of external plugins at the Gradle Plugins site.

The Project Report Plugin

The Project report plugin adds some tasks to your project which generate reports containing useful information about your build. These tasks generate the same content that you get by executing the tasks, dependencies, and properties tasks from the command line (see the section called “Project reporting”). In contrast to the command line reports, the report plugin generates the reports into a file. There is also an aggregating task that depends on all report tasks added by the plugin.

We plan to add much more to the existing reports and create additional ones in future releases of Gradle.

Usage

To use the Project report plugin, include the following in your build script:

apply plugin: 'project-report'

Tasks

The project report plugin defines the following tasks:

Table 20. Project report plugin - tasks

Task name Depends on Type Description

dependencyReport

-

DependencyReportTask

Generates the project dependency report.

htmlDependencyReport

-

HtmlDependencyReportTask

Generates an HTML dependency and dependency insight report for the project or a set of projects.

propertyReport

-

PropertyReportTask

Generates the project property report.

taskReport

-

TaskReportTask

Generates the project task report.

projectReport

dependencyReport, propertyReport, taskReport, htmlDependencyReport

Task

Generates all project reports.

Project layout

The project report plugin does not require any particular project layout.

Dependency management

The project report plugin does not define any dependency configurations.

Convention properties

The project report defines the following convention properties:

Table 21. Project report plugin - convention properties

Property name Type Default value Description

reportsDirName

String

reports

The name of the directory to generate reports into, relative to the build directory.

reportsDir

File (read-only)

buildDir/reportsDirName

The directory to generate reports into.

projects

Set<Project>

A one element set with the project the plugin was applied to.

The projects to generate the reports for.

projectReportDirName

String

project

The name of the directory to generate the project report into, relative to the reports directory.

projectReportDir

File (read-only)

reportsDir/projectReportDirName

The directory to generate the project report into.

These convention properties are provided by a convention object of type ProjectReportsPluginConvention.
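A minimal sketch of overriding one of these convention properties (the directory name is illustrative):

build.gradle

apply plugin: 'project-report'

// Write all reports under build/my-reports instead of build/reports.
reportsDirName = 'my-reports'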

The Build Dashboard Plugin

The build dashboard plugin is currently incubating. Please be aware that the DSL and other configuration may change in later Gradle versions.

The Build Dashboard plugin can be used to generate a single HTML dashboard that provides a single point of access to all of the reports generated by a build.

Usage

To use the Build Dashboard plugin, include the following in your build script:

Example 218. Using the Build Dashboard plugin

build.gradle

apply plugin: 'build-dashboard'

Applying the plugin adds the buildDashboard task to your project. The task aggregates the reports for all tasks that implement the Reporting interface from all projects in the build. It is typically only applied to the root project.

The buildDashboard task does not depend on any other tasks. It will only aggregate the reporting tasks that are independently executed as part of the build run. To generate the build dashboard, simply include this task in the list of tasks to execute. For example, “gradle buildDashboard build” will generate a dashboard for all of the reporting tasks that are executed as part of the build task.

Tasks

The Build Dashboard plugin adds the following task to the project:

Table 22. Build Dashboard plugin - tasks

Task name Depends on Type Description

buildDashboard

-

GenerateBuildDashboard

Generates build dashboard report.

Project layout

The Build Dashboard plugin does not require any particular project layout.

Dependency management

The Build Dashboard plugin does not define any dependency configurations.

Configuration

You can influence the location of the generated build dashboard via the ReportingExtension.

Comparing Builds

Build comparison support is an incubating feature. This means that it is incomplete and not yet at regular Gradle production quality. This also means that this Gradle User Guide chapter is a work in progress.

Gradle provides support for comparing the outcomes (e.g. the produced binary archives) of two builds. There are several reasons why you may want to compare the outcomes of two builds. You may want to compare:

  • A build with a newer version of Gradle than it’s currently using (i.e. upgrading the Gradle version).

  • A Gradle build with a build executed by another tool such as Apache Ant, Apache Maven or something else (i.e. migrating to Gradle).

  • The same Gradle build, with the same version, before and after a change to the build (i.e. testing build changes).

By comparing builds in these scenarios you can make an informed decision about the Gradle upgrade, migration to Gradle, or build change by understanding the differences in the outcomes. The comparison process produces an HTML report outlining which outcomes were found to be identical and identifying the differences between non-identical outcomes.

Definition of terms

The following are the terms used for build comparison and their definitions.

“Build”

In the context of build comparison, a build is not necessarily a Gradle build. It can be any invokable “process” that produces observable “outcomes”. At least one of the builds in a comparison will be a Gradle build.

“Build Outcome”

Something that happens in an observable manner during a build, such as the creation of a zip file or test execution. These are the things that are compared.

“Source Build”

The build that comparisons are being made against, typically the build in its “current” state. In other words, the left hand side of the comparison.

“Target Build”

The build that is being compared to the source build, typically the “proposed” build. In other words, the right hand side of the comparison.

“Host Build”

The Gradle build that executes the comparison process. It may be the same project as either the “target” or “source” build or may be a completely separate project. It does not need to be the same Gradle version as the “source” or “target” builds. The host build must be run with Gradle 1.2 or newer.

“Compared Build Outcome”

Build outcomes that are intended to be logically equivalent in the “source” and “target” builds, and are therefore meaningfully comparable.

“Uncompared Build Outcome”

A build outcome is uncompared if a logical equivalent from the other build cannot be found (e.g. a build produces a zip file that the other build does not).

“Unknown Build Outcome”

A build outcome that cannot be understood by the host build. This can occur when the source or target build is a newer Gradle version than the host build and that Gradle version exposes new outcome types. Unknown build outcomes can be compared in so far as they can be identified to be logically equivalent to an unknown build outcome in the other build, but no meaningful comparison of what the build outcome actually is can be performed. Using the latest Gradle version for the host build will avoid encountering unknown build outcomes.

Current Capabilities

As this is an incubating feature, a limited set of the eventual functionality has been implemented at this time.

Supported builds

Only support for comparing Gradle builds is available at this time. Both the source and target builds must execute with Gradle version 1.0 or newer. The host build must be at least version 1.2. If the host build is run with version 3.0 or newer, source and target builds must be at least version 1.2. If the host build is run with a version older than 2.0, source and target builds must be older than version 3.0. So if, for example, you want to compare a build under version 1.1 with a build under version 3.0, you have to execute the host build with a 2.x version.

Future versions will provide support for executing builds from other build systems such as Apache Ant or Apache Maven, as well as support for executing arbitrary processes (e.g. shell-script-based builds).

Supported build outcomes

Only comparison of build outcomes that are zip archives is supported at this time. This includes jar, war and ear archives.

Future versions will provide support for comparing outcomes such as test execution (i.e. which tests were executed, which tests failed, etc.).

Comparing Gradle Builds

The compare-gradle-builds plugin can be used to facilitate a comparison between two Gradle builds. The plugin adds a CompareGradleBuilds task named “compareGradleBuilds” to the project. The configuration of this task specifies what is to be compared. By default, it is configured to compare the current build with itself using the current Gradle version by executing the tasks: “clean assemble”.

apply plugin: 'compare-gradle-builds'

This task can be configured to change what is compared.

compareGradleBuilds {
    sourceBuild {
        projectDir "/projects/project-a"
        gradleVersion "1.1"
    }
    targetBuild {
        projectDir "/projects/project-b"
        gradleVersion "1.2"
    }
}

The example above specifies a comparison between two different projects using two different Gradle versions.

Trying Gradle upgrades

You can use the build comparison functionality to very quickly try a new Gradle version with your build.

To try your current build with a different Gradle version, simply add the following to the build.gradle of the root project.

apply plugin: 'compare-gradle-builds'

compareGradleBuilds {
    targetBuild.gradleVersion = "«gradle version»"
}

Then simply execute the compareGradleBuilds task. You will see the console output of the “source” and “target” builds as they are executing.

The comparison “result”

If there are any differences between the compared outcomes, the task will fail. The location of the HTML report providing insight into the comparison will be given. If all compared outcomes are found to be identical, and there are no uncompared outcomes, and there are no unknown build outcomes, the task will succeed.

You can configure the task to not fail on compared outcome differences by setting the ignoreFailures property to true.

compareGradleBuilds {
    ignoreFailures = true
}

Which archives are compared?

For an archive to be a candidate for comparison, it must be added as an artifact of the archives configuration. Take a look at Publishing artifacts for more information on how to configure and add artifacts.

The archive must also have been produced by a Zip, Jar, War or Ear task. Future versions of Gradle will support increased flexibility in this area.
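A minimal sketch, assuming a hypothetical docsZip task, of making a custom archive a candidate for comparison:

build.gradle

task docsZip(type: Zip) {
    from 'docs'
}

artifacts {
    // The build comparison only considers artifacts of the archives
    // configuration that were produced by an archive task.
    archives docsZip
}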

Publishing artifacts

This chapter describes the original publishing mechanism available in Gradle 1.0: in Gradle 1.3 a new mechanism for publishing was introduced. While this new mechanism is incubating and not yet complete, it introduces some new concepts and features that do (and will) make Gradle publishing even more powerful.

You can read about the new publishing plugins in Ivy Publishing (new) and Maven Publishing (new). Please try them out and give us feedback.

Introduction

This chapter is about how you declare the outgoing artifacts of your project, and how to work with them (e.g. upload them). We define the artifacts of the projects as the files the project provides to the outside world. This might be a library or a ZIP distribution or any other file. A project can publish as many artifacts as it wants.

Artifacts and configurations

Like dependencies, artifacts are grouped by configurations. In fact, a configuration can contain both artifacts and dependencies at the same time.

For each configuration in your project, Gradle provides the tasks uploadConfigurationName and buildConfigurationName.[12] Execution of these tasks will build or upload the artifacts belonging to the respective configuration.

The section called “Dependency configurations” shows the configurations added by the Java plugin. Two of the configurations are relevant for the usage with artifacts. The archives configuration is the standard configuration to assign your artifacts to. The Java plugin automatically assigns the default jar to this configuration. We will talk more about the runtime configuration in the section called “More about project libraries”. As with dependencies, you can declare as many custom configurations as you like and assign artifacts to them.

Declaring artifacts

Archive task artifacts

You can use an archive task to define an artifact:

Example 219. Defining an artifact using an archive task

build.gradle

task myJar(type: Jar)

artifacts {
    archives myJar
}

It is important to note that the custom archives you are creating as part of your build are not automatically assigned to any configuration. You have to assign them to a configuration explicitly.

File artifacts

You can also use a file to define an artifact:

Example 220. Defining an artifact using a file

build.gradle

def someFile = file('build/somefile.txt')

artifacts {
    archives someFile
}

Gradle will figure out the properties of the artifact based on the name of the file. You can customize these properties:

Example 221. Customizing an artifact

build.gradle

task myTask(type: MyTaskType) {
    destFile = file('build/somefile.txt')
}

artifacts {
    archives(myTask.destFile) {
        name 'my-artifact'
        type 'text'
        builtBy myTask
    }
}

There is a map-based syntax for defining an artifact using a file. The map must include a file entry that defines the file. The map may include other artifact properties:

Example 222. Map syntax for defining an artifact using a file

build.gradle

task generate(type: MyTaskType) {
    destFile = file('build/somefile.txt')
}

artifacts {
    archives file: generate.destFile, name: 'my-artifact', type: 'text', builtBy: generate
}

Publishing artifacts

We have said that there is a specific upload task for each configuration. Before you can do an upload, you have to configure the upload task and define where to publish the artifacts to. The repositories you have defined (as described in Declaring Repositories) are not automatically used for uploading. In fact, some of those repositories only allow downloading artifacts, not uploading. Here is an example of how you can configure the upload task of a configuration:

Example 223. Configuration of the upload task

build.gradle

repositories {
    flatDir {
        name "fileRepo"
        dirs "repo"
    }
}

uploadArchives {
    repositories {
        add project.repositories.fileRepo
        ivy {
            credentials {
                username "username"
                password "pw"
            }
            url "http://repo.mycompany.com"
        }
    }
}

As you can see, you can either use a reference to an existing repository or create a new repository.

If an upload repository is defined with multiple patterns, Gradle must choose a pattern to use for uploading each file. By default, Gradle will upload to the pattern defined by the url parameter, combined with the optional layout parameter. If no url parameter is supplied, Gradle will use the first defined artifactPattern for uploading artifacts and, if set, the first defined ivyPattern for uploading Ivy files.
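
For example, an Ivy upload repository can combine a url with a custom pattern layout. A hedged sketch, with hypothetical URL and patterns:

uploadArchives {
    repositories {
        ivy {
            // files are uploaded to this URL, combined with the patterns below
            url "http://repo.mycompany.com/ivy"
            layout "pattern", {
                artifact "[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"
                ivy "[organisation]/[module]/[revision]/ivy-[revision].xml"
            }
        }
    }
}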

Uploading to a Maven repository is described in the section called “Interacting with Maven repositories”.

More about project libraries

If your project is supposed to be used as a library, you need to define the artifacts of this library and the dependencies of these artifacts. The Java plugin adds a runtime configuration for this purpose, with the implicit assumption that the runtime dependencies are the dependencies of the artifact you want to publish. Of course this is fully customizable. You can add your own custom configuration or let the existing configurations extend from other configurations. You might have a different group of artifacts which have a different set of dependencies. This mechanism is very powerful and flexible.

If someone wants to use your project as a library, she simply needs to declare which configuration of the dependency to depend on. A Gradle dependency offers the configuration property to declare this. If this is not specified, the default configuration is used (see the section called “Defining the scope of a dependency with configurations”). Using your project as a library can either happen from within a multi-project build or by retrieving your project from a repository. In the latter case, an ivy.xml descriptor in the repository is supposed to contain all the necessary information. If you work with Maven repositories you don’t have this flexibility. For how to publish to a Maven repository, see the section called “Interacting with Maven repositories”.
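
For example, within a multi-project build a consumer can depend on a specific configuration of your library project. A minimal sketch, assuming a hypothetical subproject :someLib that exposes a custom someConf configuration:

dependencies {
    // depend on the 'someConf' configuration of ':someLib' rather than
    // its 'default' configuration
    compile project(path: ':someLib', configuration: 'someConf')
}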



[12] To be exact, the Base plugin provides those tasks. This plugin is automatically applied if you use the Java plugin.

The Maven Plugin

This chapter is a work in progress

The Maven plugin adds support for deploying artifacts to Maven repositories.

Usage

To use the Maven plugin, include the following in your build script:

Example 224. Using the Maven plugin

build.gradle

apply plugin: 'maven'

Tasks

The Maven plugin defines the following tasks:

Table 23. Maven plugin - tasks

Task name | Depends on | Type | Description
install | All tasks that build the associated archives. | Upload | Installs the associated artifacts to the local Maven cache, including Maven metadata generation. By default the install task is associated with the archives configuration. This configuration has by default only the default jar as an element. To learn more about installing to the local repository, see the section called “Installing to the local repository”.

Dependency management

The Maven plugin does not define any dependency configurations.

Convention properties

The Maven plugin defines the following convention properties:

Table 24. Maven plugin - properties

Property name | Type | Default value | Description
mavenPomDir | File | ${project.buildDir}/poms | The directory where the generated POMs are written to.
conf2ScopeMappings | Conf2ScopeMappingContainer | n/a | Instructions for mapping Gradle configurations to Maven scopes. See the section called “Dependency mapping”.

These properties are provided by a MavenPluginConvention convention object.

Convention methods

The Maven plugin provides a factory method for creating a POM. This is useful if you need a POM without the context of uploading to a Maven repository.

Example 225. Creating a standalone POM

build.gradle

task writeNewPom {
    doLast {
        pom {
            project {
                inceptionYear '2008'
                licenses {
                    license {
                        name 'The Apache Software License, Version 2.0'
                        url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
                        distribution 'repo'
                    }
                }
            }
        }.writeTo("$buildDir/newpom.xml")
    }
}

Amongst other things, Gradle supports the same builder syntax as polyglot Maven. To learn more about the Gradle Maven POM object, see MavenPom. See also: MavenPluginConvention

Interacting with Maven repositories

Introduction

With Gradle you can deploy to remote Maven repositories or install to your local Maven repository. This includes all Maven metadata manipulation and also works for Maven snapshots. In fact, Gradle’s deployment is 100 percent Maven compatible, as we use the native Maven Ant tasks under the hood.

Deploying to a Maven repository is only half the fun if you don’t have a POM. Fortunately Gradle can generate this POM for you using the dependency information it has.

Deploying to a Maven repository

Let’s assume your project produces just the default jar file. Now you want to deploy this jar file to a remote Maven repository.

Example 226. Upload of file to remote Maven repository

build.gradle

apply plugin: 'maven'

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://localhost/tmp/myRepo/")
        }
    }
}

That is all. Calling the uploadArchives task will generate the POM and deploy the artifact and the POM to the specified repository.

There is more work to do if you need support for protocols other than file. In this case the native Maven code we delegate to needs additional libraries. Which libraries are needed depends on what protocol you plan to use. The available protocols and the corresponding libraries are listed in Table 25 (note that these libraries have their own transitive dependencies).[13] For example, to use the ssh protocol you can do:

Example 227. Upload of file via SSH

build.gradle

configurations {
    deployerJars
}

repositories {
    mavenCentral()
}

dependencies {
    deployerJars "org.apache.maven.wagon:wagon-ssh:2.2"
}

uploadArchives {
    repositories.mavenDeployer {
        configuration = configurations.deployerJars
        repository(url: "scp://repos.mycompany.com/releases") {
            authentication(userName: "me", password: "myPassword")
        }
    }
}

There are many configuration options for the Maven deployer. The configuration is done via a Groovy builder. All the elements of this tree are Java beans. To configure the simple attributes you pass a map to the bean elements. To add bean elements to its parent, you use a closure. In the example above, repository and authentication are such bean elements. Table 26 lists the available bean elements and a link to the Javadoc of the corresponding class. In the Javadoc you can see the possible attributes you can set for a particular element.

In Maven you can define repositories and optionally snapshot repositories. If no snapshot repository is defined, releases and snapshots are both deployed to the repository element. Otherwise snapshots are deployed to the snapshotRepository element.
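
A hedged sketch of declaring both elements, with hypothetical repository URLs:

uploadArchives {
    repositories.mavenDeployer {
        // release versions are deployed here
        repository(url: "http://repo.mycompany.com/releases") {
            authentication(userName: "me", password: "myPassword")
        }
        // snapshot versions are deployed here instead
        snapshotRepository(url: "http://repo.mycompany.com/snapshots") {
            authentication(userName: "me", password: "myPassword")
        }
    }
}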

Table 25. Protocol jars for Maven deployment

Protocol | Library
http | org.apache.maven.wagon:wagon-http:2.2
ssh | org.apache.maven.wagon:wagon-ssh:2.2
ssh-external | org.apache.maven.wagon:wagon-ssh-external:2.2
ftp | org.apache.maven.wagon:wagon-ftp:2.2
webdav | org.apache.maven.wagon:wagon-webdav:1.0-beta-2
file | -

Installing to the local repository

The Maven plugin adds an install task to your project. This task depends on all the archive tasks of the archives configuration. It installs those archives to your local Maven repository. If the default location for the local repository is redefined in a Maven settings.xml, this task takes that into account.

Maven POM generation

When deploying an artifact to a Maven repository, Gradle automatically generates a POM for it. The groupId, artifactId, version and packaging elements used for the POM default to the values shown in the table below. The dependency elements are created from the project’s dependency declarations.

Table 27. Default Values for Maven POM generation

Maven Element | Default Value
groupId | project.group
artifactId | uploadTask.repositories.mavenDeployer.pom.artifactId (if set) or archiveTask.baseName
version | project.version
packaging | archiveTask.extension

Here, uploadTask and archiveTask refer to the tasks used for uploading and generating the archive, respectively (for example uploadArchives and jar). archiveTask.baseName defaults to project.archivesBaseName which in turn defaults to project.name.

When you set the “archiveTask.baseName” property to a value other than the default, you’ll also have to set uploadTask.repositories.mavenDeployer.pom.artifactId to the same value. Otherwise, the project at hand may be referenced with the wrong artifact ID from generated POMs for other projects in the same build.
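
A minimal sketch of keeping the two values in sync (the name my-custom-name is hypothetical):

archivesBaseName = 'my-custom-name'

uploadArchives {
    repositories.mavenDeployer {
        // keep the published artifact ID consistent with the archive base name
        pom.artifactId = 'my-custom-name'
    }
}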

Generated POMs can be found in <buildDir>/poms. They can be further customized via the MavenPom API. For example, you might want the artifact deployed to the Maven repository to have a different version or name than the artifact generated by Gradle. To customize these you can do:

Example 228. Customization of pom

build.gradle

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://localhost/tmp/myRepo/")
            pom.version = '1.0Maven'
            pom.artifactId = 'myMavenName'
        }
    }
}

To add additional content to the POM, the pom.project builder can be used. With this builder, any element listed in the Maven POM reference can be added.

Example 229. Builder style customization of pom

build.gradle

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://localhost/tmp/myRepo/")
            pom.project {
                licenses {
                    license {
                        name 'The Apache Software License, Version 2.0'
                        url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
                        distribution 'repo'
                    }
                }
            }
        }
    }
}

Note: groupId, artifactId, version, and packaging should always be set directly on the pom object.

You can also hook into the POM generation to modify the auto-generated content, for example to mark a generated dependency as optional:

Example 230. Modifying auto-generated content

build.gradle

def installer = install.repositories.mavenInstaller
def deployer = uploadArchives.repositories.mavenDeployer

[installer, deployer]*.pom*.whenConfigured {pom ->
    pom.dependencies.find {dep -> dep.groupId == 'group3' && dep.artifactId == 'runtime' }.optional = true
}

If you have more than one artifact to publish, things work a little bit differently. See the section called “Multiple artifacts per project”.

To customize the settings for the Maven installer (see the section called “Installing to the local repository”), you can do:

Example 231. Customization of Maven installer

build.gradle

install {
    repositories.mavenInstaller {
        pom.version = '1.0Maven'
        pom.artifactId = 'myName'
    }
}

Multiple artifacts per project

Maven can only deal with one artifact per project. This is reflected in the structure of the Maven POM. We think there are many situations where it makes sense to have more than one artifact per project. In such cases you need to generate multiple POMs and explicitly declare each artifact you want to publish to a Maven repository. The MavenDeployer and the MavenInstaller both provide an API for this:

Example 232. Generation of multiple poms

build.gradle

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://localhost/tmp/myRepo/")
            addFilter('api') {artifact, file ->
                artifact.name == 'api'
            }
            addFilter('service') {artifact, file ->
                artifact.name == 'service'
            }
            pom('api').version = 'mySpecialMavenVersion'
        }
    }
}

You need to declare a filter for each artifact you want to publish. Each filter defines a boolean expression that determines which Gradle artifacts it accepts. Each filter also has a POM associated with it which you can configure. To learn more about this have a look at PomFilterContainer and its associated classes.

Dependency mapping

The Maven plugin configures the default mapping between the Gradle configurations added by the Java and War plugins and the Maven scopes. Most of the time you don’t need to touch this and you can safely skip this section. The mapping works as follows: you can map a configuration to one and only one scope, and different configurations can be mapped to the same scope or to different scopes. You can also assign a priority to a particular configuration-to-scope mapping. Have a look at Conf2ScopeMappingContainer to learn more. To access the mapping configuration you can say:

Example 233. Accessing a mapping configuration

build.gradle

task mappings {
    doLast {
        println conf2ScopeMappings.mappings
    }
}
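
You can also register additional mappings. A hedged sketch, assuming a hypothetical custom configuration and using Conf2ScopeMappingContainer.addMapping(priority, configuration, scope):

configurations {
    integrationTestCompile
}

// map the custom configuration to the Maven 'test' scope with priority 700
conf2ScopeMappings.addMapping(700, configurations.integrationTestCompile, 'test')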

Gradle exclude rules are converted to Maven excludes where possible. Such a conversion is possible if the Gradle exclude rule specifies both the group and the module name (Maven needs both, in contrast to Ivy). Per-configuration excludes are also included in the Maven POM, if they are convertible.
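
For example, an exclude rule that specifies both group and module converts cleanly into a Maven exclude (the coordinates below are hypothetical):

dependencies {
    compile('org.example:some-lib:1.0') {
        // both group and module are given, so this maps to a Maven <exclusion>
        exclude group: 'org.unwanted', module: 'unwanted-module'
    }
}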



[13] It is planned for a future release to provide out-of-the-box support for this

The Signing Plugin

The signing plugin adds the ability to digitally sign built files and artifacts. These digital signatures can then be used to prove who built the artifact the signature is attached to as well as other information such as when the signature was generated.

The signing plugin currently only provides support for generating OpenPGP signatures (which is the signature format required for publication to the Maven Central Repository).

Usage

To use the Signing plugin, include the following in your build script:

Example 234. Using the Signing plugin

build.gradle

apply plugin: 'signing'

Signatory credentials

In order to create OpenPGP signatures, you will need a key pair (instructions on creating a key pair using the GnuPG tools can be found in the GnuPG HOWTOs). You need to provide the signing plugin with your key information, which means three things:

  • The public key ID (an 8 character hexadecimal string).

  • The absolute path to the secret key ring file containing your private key.

  • The passphrase used to protect your private key.

These items must be supplied as the values of properties signing.keyId, signing.secretKeyRingFile, and signing.password respectively. Given the personal and private nature of these values, a good practice is to store them in the user gradle.properties file (described in the section called “System properties”).

signing.keyId=24875D73
signing.password=secret
signing.secretKeyRingFile=/Users/me/.gnupg/secring.gpg

If specifying this information (especially signing.password) in the user gradle.properties file is not feasible for your environment, you can source the information however you need to and set the project properties manually.

import org.gradle.plugins.signing.Sign

gradle.taskGraph.whenReady { taskGraph ->
    if (taskGraph.allTasks.any { it instanceof Sign }) {
        // Use Java 6's console to read from the console (no good for
        // a CI environment)
        Console console = System.console()
        console.printf "\n\nWe have to sign some things in this build." +
                       "\n\nPlease enter your signing details.\n\n"

        def id = console.readLine("PGP Key Id: ")
        def file = console.readLine("PGP Secret Key Ring File (absolute path): ")
        def password = console.readPassword("PGP Private Key Password: ")

        allprojects { ext."signing.keyId" = id }
        allprojects { ext."signing.secretKeyRingFile" = file }
        allprojects { ext."signing.password" = password }

        console.printf "\nThanks.\n\n"
    }
}

Note that the presence of a null value for any of these three properties will cause an exception.

Using OpenPGP subkeys

OpenPGP supports subkeys, which are like the normal keys, except they’re bound to a master key pair. One feature of OpenPGP subkeys is that they can be revoked independently of the master keys, which makes key management easier. A practical case study of how subkeys can be leveraged in software development can be read on the Debian wiki.

The signing plugin supports OpenPGP subkeys out of the box. Just specify a subkey ID as the value in the signing.keyId property.

Using gpg-agent

By default the signing plugin uses a Java-based implementation of PGP for signing. This implementation cannot use the gpg-agent program for managing private keys, though. If you want to use the gpg-agent, you can change the signatory implementation used by the signing plugin:

Example 235. Sign with GnuPG

build.gradle

signing {
    useGpgCmd()
    sign configurations.archives
}

This tells the signing plugin to use the GnupgSignatory instead of the default PgpSignatory. The GnupgSignatory relies on the gpg2 program to sign the artifacts. Of course, this requires that GnuPG is installed.

Without any further configuration the gpg2 (on Windows: gpg2.exe) executable found on the PATH will be used. The password is supplied by the gpg-agent and the default key is used for signing.

Gnupg signatory configuration

The GnupgSignatory supports a number of configuration options for controlling how gpg is invoked. These are typically set in gradle.properties:

Example 236. Configure the GnupgSignatory

gradle.properties

signing.gnupg.executable=gpg
signing.gnupg.useLegacyGpg=true
signing.gnupg.homeDir=gnupg-home
signing.gnupg.optionsFile=gnupg-home/gpg.conf
signing.gnupg.keyName=24875D73
signing.gnupg.passphrase=gradle

signing.gnupg.executable

The gpg executable that is invoked for signing. The default value of this property depends on useLegacyGpg. If that is true then the default value of executable is "gpg"; otherwise it is "gpg2".

signing.gnupg.useLegacyGpg

Must be true if GnuPG version 1 is used and false otherwise. The default value of the property is false.

signing.gnupg.homeDir

Sets the home directory for GnuPG. If not given, the default home directory of GnuPG is used.

signing.gnupg.optionsFile

Sets a custom options file for GnuPG. If not given, GnuPG’s default configuration file is used.

signing.gnupg.keyName

The id of the key that should be used for signing. If not given, the default key configured in GnuPG will be used.

signing.gnupg.passphrase

The passphrase for unlocking the secret key. If not given, the gpg-agent program is used for getting the passphrase.

All configuration properties are optional.

Specifying what to sign

As well as configuring how things are to be signed (i.e. the signatory configuration), you must also specify what is to be signed. The Signing plugin provides a DSL that allows you to specify the tasks and/or configurations that should be signed.

Signing Configurations

It is common to want to sign the artifacts of a configuration. For example, the Java plugin configures a jar task and this jar artifact is added to the archives configuration. Using the Signing DSL, you can specify that all of the artifacts of this configuration should be signed.

Example 237. Signing a configuration

build.gradle

signing {
    sign configurations.archives
}

This will create a task (of type Sign) in your project named “signArchives”, that will build any archives artifacts (if needed) and then generate signatures for them. The signature files will be placed alongside the artifacts being signed.

Example 238. Signing a configuration output

Output of gradle signArchives

> gradle signArchives
:compileJava
:processResources
:classes
:jar
:signArchives

BUILD SUCCESSFUL in 0s
4 actionable tasks: 4 executed

Signing Tasks

In some cases the artifact that you need to sign may not be part of a configuration. In this case you can sign the task that produces the artifact directly.

Example 239. Signing a task

build.gradle

task stuffZip (type: Zip) {
    baseName = "stuff"
    from "src/stuff"
}

signing {
    sign stuffZip
}

This will create a task (of type Sign) in your project named “signStuffZip”, that will build the input task’s archive (if needed) and then sign it. The signature file will be placed alongside the artifact being signed.

Example 240. Signing a task output

Output of gradle signStuffZip

> gradle signStuffZip
:stuffZip
:signStuffZip

BUILD SUCCESSFUL in 0s
2 actionable tasks: 2 executed

For a task to be “signable”, it must produce an archive of some type. Tasks that do this are the Tar, Zip, Jar, War and Ear tasks.

Conditional Signing

A common usage pattern is to only sign build artifacts under certain conditions. For example, you may not wish to sign artifacts for non-release versions. To achieve this, you can specify that signing is only required under certain conditions.

Example 241. Conditional signing

build.gradle

version = '1.0-SNAPSHOT'
ext.isReleaseVersion = !version.endsWith("SNAPSHOT")

signing {
    required { isReleaseVersion && gradle.taskGraph.hasTask("uploadArchives") }
    sign configurations.archives
}

In this example, we only want to require signing if we are building a release version and we are going to publish it. Because we are inspecting the task graph to determine if we are going to be publishing, we must set the signing.required property to a closure to defer the evaluation. See SigningExtension.setRequired(java.lang.Object) for more information.

Publishing the signatures

When specifying what is to be signed via the Signing DSL, the resultant signature artifacts are automatically added to the signatures and archives dependency configurations. This means that if you want to upload your signatures to your distribution repository along with the artifacts you simply execute the uploadArchives task as normal.
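
A minimal sketch of a task that lists the generated signature artifacts (the task name is hypothetical):

task showSignatures {
    doLast {
        // the signing DSL adds signature artifacts to the 'signatures' configuration
        configurations.signatures.allArtifacts.each { artifact ->
            println "${artifact.name} -> ${artifact.file}"
        }
    }
}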

Signing POM files

Signing the POM file generated by the Maven Publishing plugin is currently not supported. Future versions of Gradle might add this functionality.

When deploying signatures for your artifacts to a Maven repository, you will also want to sign the published POM file. The signing plugin adds a signing.signPom() (see: SigningExtension.signPom(org.gradle.api.artifacts.maven.MavenDeployment, groovy.lang.Closure)) method that can be used in the beforeDeployment() block in your upload task configuration.

Example 242. Signing a POM for deployment

build.gradle

uploadArchives {
    repositories {
        mavenDeployer {
            beforeDeployment { MavenDeployment deployment -> signing.signPom(deployment) }
        }
    }
}

When signing is not required and the POM cannot be signed due to insufficient configuration (i.e. no credentials for signing) then the signPom() method will silently do nothing.

Ivy Publishing (new)

This chapter describes the new incubating Ivy publishing support provided by the “ivy-publish” plugin. Eventually this new publishing support will replace publishing via the Upload task.

If you are looking for documentation on the original Ivy publishing support using the Upload task please see Publishing artifacts.

This chapter describes how to publish build artifacts in the Apache Ivy format, usually to a repository for consumption by other builds or projects. What is published is one or more artifacts created by the build, and an Ivy module descriptor (normally ivy.xml) that describes the artifacts and the dependencies of the artifacts, if any.

A published Ivy module can be consumed by Gradle (see Declaring Dependencies) and other tools that understand the Ivy format.

The “ivy-publish” Plugin

The ability to publish in the Ivy format is provided by the “ivy-publish” plugin.

The “publishing” plugin creates an extension on the project named “publishing” of type PublishingExtension. This extension provides a container of named publications and a container of named repositories. The “ivy-publish” plugin works with IvyPublication publications and IvyArtifactRepository repositories.

Example 243. Applying the “ivy-publish” plugin

build.gradle

apply plugin: 'ivy-publish'

Applying the “ivy-publish” plugin does the following:

  • Applies the “publishing” plugin, which adds the publishing extension described above.

  • Establishes a rule to automatically create a GenerateIvyDescriptor task for each IvyPublication added (see the section called “Generating the Ivy module descriptor file without publishing”).

  • Establishes a rule to automatically create a PublishToIvyRepository task for each combination of IvyPublication and IvyArtifactRepository added (see the section called “Performing a publish”).

Publications

If you are not familiar with project artifacts and configurations, you should read Publishing artifacts, which introduces these concepts. That chapter also describes “publishing artifacts” using a different mechanism than what is described in this chapter. The publishing functionality described here will eventually supersede that functionality.

Publication objects describe the structure/configuration of a publication to be created. Publications are published to repositories via tasks, and the configuration of the publication object determines exactly what is published. All of the publications of a project are defined in the PublishingExtension.getPublications() container. Each publication has a unique name within the project.

For the “ivy-publish” plugin to have any effect, an IvyPublication must be added to the set of publications. This publication determines which artifacts are actually published as well as the details included in the associated Ivy module descriptor file. A publication can be configured by adding components, customizing artifacts, and by modifying the generated module descriptor file directly.

Publishing a Software Component

The simplest way to publish a Gradle project to an Ivy repository is to specify a SoftwareComponent to publish. The components presently available for publication are:

Table 28. Software Components

Name | Provided By | Artifacts | Dependencies
java | Java Plugin | Generated jar file | Dependencies from 'runtime' configuration
web | War Plugin | Generated war file | No dependencies

In the following example, artifacts and runtime dependencies are taken from the java component, which is added by the Java Plugin.

Example 244. Publishing a Java module to Ivy

build.gradle

publications {
    ivyJava(IvyPublication) {
        from components.java
    }
}

Publishing custom artifacts

It is also possible to explicitly configure artifacts to be included in the publication. Artifacts are commonly supplied as raw files, or as instances of AbstractArchiveTask (e.g. Jar, Zip).

For each custom artifact, it is possible to specify the name, extension, type, classifier and conf values to use for publication. Note that each artifact must have a unique name/classifier/extension combination.

Configure custom artifacts as follows:

Example 245. Publishing additional artifact to Ivy

build.gradle

task sourceJar(type: Jar) {
    from sourceSets.main.java
    classifier "source"
}
publishing {
    publications {
        ivy(IvyPublication) {
            from components.java
            artifact(sourceJar) {
                type "source"
                conf "compile"
            }
        }
    }
}

See the IvyPublication class in the API documentation for more detailed information on how artifacts can be customized.

Identity values for the published project

The generated Ivy module descriptor file contains an <info> element that identifies the module. The default identity values are derived from the following project properties:

  • organisation - project.group

  • module - project.name

  • revision - project.version

  • status - project.status

Overriding the default identity values is easy: simply specify the organisation, module or revision attributes when configuring the IvyPublication. The status and branch attributes can be set via the descriptor property (see IvyModuleDescriptorSpec). The descriptor property can also be used to add additional custom elements as children of the <info> element.

Example 246. Customizing the publication identity

build.gradle

publishing {
    publications {
        ivy(IvyPublication) {
            organisation 'org.gradle.sample'
            module 'project1-sample'
            revision '1.1'
            descriptor.status = 'milestone'
            descriptor.branch = 'testing'
            descriptor.extraInfo 'http://my.namespace', 'myElement', 'Some value'

            from components.java
        }
    }
}

Certain repositories are not able to handle all supported characters. For example, the ':' character cannot be used as an identifier when publishing to a filesystem-backed repository on Windows.

Gradle will handle any valid Unicode character for organisation, module and revision (as well as artifact name, extension and classifier). The only values that are explicitly prohibited are ‘\’, ‘/’ and any ISO control character. The supplied values are validated early during publication.

Modifying the generated module descriptor

At times, the module descriptor file generated from the project information will need to be tweaked before publishing. The “ivy-publish” plugin provides a hook to allow such modification.

Example 247. Customizing the module descriptor file

build.gradle

publications {
    ivyCustom(IvyPublication) {
        descriptor.withXml {
            asNode().info[0].appendNode('description',
                                        'A demonstration of ivy descriptor customization')
        }
    }
}

In this example we are simply adding a 'description' element to the generated Ivy dependency descriptor, but this hook allows you to modify any aspect of the generated descriptor. For example, you could replace the version range for a dependency with the actual version used to produce the build.

See IvyModuleDescriptorSpec.withXml(org.gradle.api.Action) in the API documentation for more information.

It is possible to modify virtually any aspect of the created descriptor should you need to. This means that it is also possible to modify the descriptor in such a way that it is no longer a valid Ivy module descriptor, so care must be taken when using this feature.

The identifier (organisation, module, revision) of the published module is an exception; these values cannot be modified in the descriptor using the withXml hook.

Publishing multiple modules

Sometimes it’s useful to publish multiple modules from your Gradle build, without creating a separate Gradle subproject. An example is publishing a separate API and implementation jar for your library. With Gradle this is simple:

Example 248. Publishing multiple modules from a single project

build.gradle

task apiJar(type: Jar) {
    baseName "publishing-api"
    from sourceSets.main.output
    exclude '**/impl/**'
}
publishing {
    publications {
        impl(IvyPublication) {
            organisation 'org.gradle.sample.impl'
            module 'project2-impl'
            revision '2.3'

            from components.java
        }
        api(IvyPublication) {
            organisation 'org.gradle.sample'
            module 'project2-api'
            revision '2'
        }
    }
}

If a project defines multiple publications then Gradle will publish each of these to the defined repositories. Each publication must be given a unique identity as described above.

Repositories

Publications are published to repositories. The repositories to publish to are defined by the PublishingExtension.getRepositories() container.

Example 249. Declaring repositories to publish to

build.gradle

repositories {
    ivy {
        // change to point to your repo, e.g. http://my.org/repo
        url "$buildDir/repo"
    }
}

The DSL used to declare repositories for publishing is the same DSL that is used to declare repositories for dependencies (RepositoryHandler). However, in the context of Ivy publication only the repositories created by the ivy() methods can be used as publication destinations. You cannot publish an IvyPublication to a Maven repository for example.

Performing a publish

The “ivy-publish” plugin automatically creates a PublishToIvyRepository task for each IvyPublication and IvyArtifactRepository combination in the publishing.publications and publishing.repositories containers respectively.

The created task is named “publish«PUBNAME»PublicationTo«REPONAME»Repository”, which is “publishIvyJavaPublicationToIvyRepository” for this example. This task is of type PublishToIvyRepository.

Example 250. Choosing a particular publication to publish

build.gradle

apply plugin: 'java'
apply plugin: 'ivy-publish'

group = 'org.gradle.sample'
version = '1.0'

publishing {
    publications {
        ivyJava(IvyPublication) {
            from components.java
        }
    }
    repositories {
        ivy {
            // change to point to your repo, e.g. http://my.org/repo
            url "$buildDir/repo"
        }
    }
}

Output of gradle publishIvyJavaPublicationToIvyRepository

> gradle publishIvyJavaPublicationToIvyRepository
:generateDescriptorFileForIvyJavaPublication
:compileJava NO-SOURCE
:processResources NO-SOURCE
:classes UP-TO-DATE
:jar
:publishIvyJavaPublicationToIvyRepository

BUILD SUCCESSFUL in 0s
3 actionable tasks: 3 executed

The “publish” lifecycle task

The “publishing” plugin (that the “ivy-publish” plugin implicitly applies) adds a lifecycle task named “publish” that can be used to publish all publications to all applicable repositories.

In more concrete terms, executing this task will execute all PublishToIvyRepository tasks in the project. This is usually the most convenient way to perform a publish.

Example 251. Publishing all publications via the “publish” lifecycle task

Output of gradle publish

> gradle publish
:generateDescriptorFileForIvyJavaPublication
:compileJava NO-SOURCE
:processResources NO-SOURCE
:classes UP-TO-DATE
:jar
:publishIvyJavaPublicationToIvyRepository
:publish

BUILD SUCCESSFUL in 0s
3 actionable tasks: 3 executed

Generating the Ivy module descriptor file without publishing

At times it is useful to generate the Ivy module descriptor file (normally ivy.xml) without publishing your module to an Ivy repository. Since descriptor file generation is performed by a separate task, this is very easy to do.

The “ivy-publish” plugin creates one GenerateIvyDescriptor task for each registered IvyPublication, named “generateDescriptorFileFor«PUBNAME»Publication”, which will be “generateDescriptorFileForIvyJavaPublication” for the previous example of the “ivyJava” publication.

You can specify where the generated Ivy file will be located by setting the destination property on the generated task. By default this file is written to “build/publications/«PUBNAME»/ivy.xml”.

Example 252. Generating the Ivy module descriptor file

build.gradle

model {
    tasks.generateDescriptorFileForIvyCustomPublication {
        destination = file("$buildDir/generated-ivy.xml")
    }
}

Output of gradle generateDescriptorFileForIvyCustomPublication

> gradle generateDescriptorFileForIvyCustomPublication
:generateDescriptorFileForIvyCustomPublication

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

The “ivy-publish” plugin leverages some experimental support for late plugin configuration, and the GenerateIvyDescriptor task will not be constructed until the publishing extension is configured. The simplest way to ensure that the publishing plugin is configured when you attempt to access the GenerateIvyDescriptor task is to place the access inside a model block, as the example above demonstrates.

The same applies to any attempt to access publication-specific tasks like PublishToIvyRepository. These tasks should be referenced from within a model block.
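
A hedged sketch of configuring such a task from within a model block (the task name matches the earlier “ivyJava” example; the extra dependency is hypothetical):

model {
    tasks.publishIvyJavaPublicationToIvyRepository {
        // for example, require the build's checks to pass before publishing
        dependsOn 'check'
    }
}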

Complete example

The following example demonstrates publishing with a multi-project build. Each project publishes a Java component and a configured additional source artifact. The descriptor file is customized to include the project description for each project.

Example 253. Publishing a Java module

build.gradle

subprojects {
    apply plugin: 'java'
    apply plugin: 'ivy-publish'

    version = '1.0'
    group = 'org.gradle.sample'

    repositories {
        mavenCentral()
    }
    task sourceJar(type: Jar) {
        from sourceSets.main.java
        classifier "source"
    }
}

project(":project1") {
    description = "The first project"

    dependencies {
       compile 'junit:junit:4.12', project(':project2')
    }
}

project(":project2") {
    description = "The second project"

    dependencies {
       compile 'commons-collections:commons-collections:3.2.2'
    }
}

subprojects {
    publishing {
        repositories {
            ivy {
                // change to point to your repo, e.g. http://my.org/repo
                url "${rootProject.buildDir}/repo"
            }
        }
        publications {
            ivy(IvyPublication) {
                from components.java
                artifact(sourceJar) {
                    type "source"
                    conf "compile"
                }
                descriptor.withXml {
                    asNode().info[0].appendNode('description', description)
                }
            }
        }
    }
}

The result is that the following artifacts will be published for each project:

  • The Ivy module descriptor file: “ivy-1.0.xml”.

  • The primary “jar” artifact for the Java component: “project1-1.0.jar”.

  • The source “jar” artifact that has been explicitly configured: “project1-1.0-source.jar”.

When project1 is published, the module descriptor (i.e. the ivy.xml file) that is produced will look like:

Note that «PUBLICATION-TIME-STAMP» in this example Ivy module descriptor will be the timestamp of when the descriptor was generated.

Example 254. Example generated ivy.xml

output-ivy.xml

<?xml version="1.0" encoding="UTF-8"?>
<ivy-module version="2.0">
  <info organisation="org.gradle.sample" module="project1" revision="1.0" status="integration" publication="«PUBLICATION-TIME-STAMP»">
    <description>The first project</description>
  </info>
  <configurations>
    <conf name="compile" visibility="public"/>
    <conf name="default" visibility="public" extends="compile,runtime"/>
    <conf name="runtime" visibility="public"/>
  </configurations>
  <publications>
    <artifact name="project1" type="jar" ext="jar" conf="compile"/>
    <artifact name="project1" type="source" ext="jar" conf="compile" m:classifier="source" xmlns:m="http://ant.apache.org/ivy/maven"/>
  </publications>
  <dependencies>
    <dependency org="junit" name="junit" rev="4.12" conf="compile-&gt;default"/>
    <dependency org="org.gradle.sample" name="project2" rev="1.0" conf="compile-&gt;default"/>
  </dependencies>
</ivy-module>

Future features

The “ivy-publish” plugin functionality as described above is incomplete, as the feature is still incubating. In upcoming Gradle releases, the functionality will be expanded to include (but is not limited to):

  • Convenient customization of module attributes (module, organisation etc.)

  • Convenient customization of dependencies reported in module descriptor.

  • Multiple discrete publications per project

Maven Publishing (new)

This chapter describes the new incubating Maven publishing support provided by the “maven-publish” plugin. Eventually this new publishing support will replace publishing via the Upload task.

Note: Signing the POM file generated by this plugin is currently not supported. Future versions of Gradle might add this functionality. Please use the Maven plugin for the purpose of publishing your artifacts to Maven Central.

If you are looking for documentation on the original Maven publishing support using the Upload task please see Publishing artifacts.

This chapter describes how to publish build artifacts to an Apache Maven Repository. A module published to a Maven repository can be consumed by Maven, Gradle (see Declaring Dependencies) and other tools that understand the Maven repository format.

The “maven-publish” Plugin

The ability to publish in the Maven format is provided by the “maven-publish” plugin.

The “publishing” plugin creates an extension on the project named “publishing” of type PublishingExtension. This extension provides a container of named publications and a container of named repositories. The “maven-publish” plugin works with MavenPublication publications and MavenArtifactRepository repositories.

Example 255. Applying the 'maven-publish' plugin

build.gradle

apply plugin: 'maven-publish'

Applying the “maven-publish” plugin does the following:

  • Applies the “publishing” plugin, which adds the publishing extension described above.

  • Establishes a rule to automatically create a GenerateMavenPom task for each MavenPublication added (see the section called “Generating the POM file without publishing”).

  • Establishes a rule to automatically create a PublishToMavenRepository task for each combination of MavenPublication and MavenArtifactRepository added (see the section called “Performing a publish”).

  • Establishes a rule to automatically create a PublishToMavenLocal task for each MavenPublication added (see the section called “Publishing to Maven Local”).

Publications

If you are not familiar with project artifacts and configurations, you should read Publishing artifacts, which introduces these concepts. That chapter also describes “publishing artifacts” using a different mechanism than what is described in this chapter. The publishing functionality described here will eventually supersede that functionality.

Publication objects describe the structure/configuration of a publication to be created. Publications are published to repositories via tasks, and the configuration of the publication object determines exactly what is published. All of the publications of a project are defined in the PublishingExtension.getPublications() container. Each publication has a unique name within the project.

For the “maven-publish” plugin to have any effect, a MavenPublication must be added to the set of publications. This publication determines which artifacts are actually published as well as the details included in the associated POM file. A publication can be configured by adding components, customizing artifacts, and by modifying the generated POM file directly.

Publishing a Software Component

The simplest way to publish a Gradle project to a Maven repository is to specify a SoftwareComponent to publish. The components presently available for publication are:

Table 29. Software Components

Name | Provided By | Artifacts | Dependencies
java | The Java Plugin | Generated jar file | Dependencies from 'runtime' configuration
web | The War Plugin | Generated war file | No dependencies

In the following example, artifacts and runtime dependencies are taken from the java component, which is added by the Java Plugin.

Example 256. Adding a MavenPublication for a Java component

build.gradle

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}

Publishing custom artifacts

It is also possible to explicitly configure artifacts to be included in the publication. Artifacts are commonly supplied as raw files, or as instances of AbstractArchiveTask (e.g. Jar, Zip).

For each custom artifact, it is possible to specify the extension and classifier values to use for publication. Note that only one of the published artifacts can have an empty classifier, and all other artifacts must have a unique classifier/extension combination.

Configure custom artifacts as follows:

Example 257. Adding additional artifact to a MavenPublication

build.gradle

task sourceJar(type: Jar) {
    from sourceSets.main.allJava
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java

            artifact sourceJar {
                classifier "sources"
            }
        }
    }
}

See the MavenPublication class in the API documentation for more information about how artifacts can be customized.

Identity values in the generated POM

The attributes of the generated POM file will contain identity values derived from the following project properties:

  • groupId - project.group

  • artifactId - project.name

  • version - project.version

Overriding the default identity values is easy: simply specify the groupId, artifactId or version attributes when configuring the MavenPublication.

Example 258. Customizing the publication identity

build.gradle

publishing {
    publications {
        maven(MavenPublication) {
            groupId 'org.gradle.sample'
            artifactId 'project1-sample'
            version '1.1'

            from components.java
        }
    }
}

Certain repositories will not be able to handle all supported characters. For example, the ':' character cannot be used as an identifier when publishing to a filesystem-backed repository on Windows.

Maven restricts 'groupId' and 'artifactId' to a limited character set ([A-Za-z0-9_\-.]+) and Gradle enforces this restriction. For 'version' (as well as artifact 'extension' and 'classifier'), Gradle will handle any valid Unicode character.

The only Unicode values that are explicitly prohibited are ‘\’, ‘/’ and any ISO control character. Supplied values are validated early in publication.

Modifying the generated POM

The generated POM file may need to be tweaked before publishing. The “maven-publish” plugin provides a hook to allow such modification.

Example 259. Modifying the POM file

build.gradle

publications {
    mavenCustom(MavenPublication) {
        pom.withXml {
            asNode().appendNode('description',
                                'A demonstration of maven POM customization')
        }
    }
}

In this example we are adding a 'description' element for the generated POM. With this hook, you can modify any aspect of the POM. For example, you could replace the version range for a dependency with the actual version used to produce the build.

See MavenPom.withXml(org.gradle.api.Action) in the API documentation for more information.

It is possible to modify virtually any aspect of the created POM. This means that it is also possible to modify the POM in such a way that it is no longer a valid Maven POM, so care must be taken when using this feature.

The identifier (groupId, artifactId, version) of the published module is an exception; these values cannot be modified in the POM using the withXml hook.

Publishing multiple modules

Sometimes it’s useful to publish multiple modules from your Gradle build, without creating a separate Gradle subproject. An example is publishing a separate API and implementation jar for your library. With Gradle this is simple:

Example 260. Publishing multiple modules from a single project

build.gradle

task apiJar(type: Jar) {
    baseName "publishing-api"
    from sourceSets.main.output
    exclude '**/impl/**'
}

publishing {
    publications {
        impl(MavenPublication) {
            groupId 'org.gradle.sample.impl'
            artifactId 'project2-impl'
            version '2.3'

            from components.java
        }
        api(MavenPublication) {
            groupId 'org.gradle.sample'
            artifactId 'project2-api'
            version '2'

            artifact apiJar
        }
    }
}

If a project defines multiple publications then Gradle will publish each of these to the defined repositories. Each publication must be given a unique identity as described above.

Repositories

Publications are published to repositories. The repositories to publish to are defined by the PublishingExtension.getRepositories() container.

Example 261. Declaring repositories to publish to

build.gradle

publishing {
    repositories {
        maven {
            // change to point to your repo, e.g. http://my.org/repo
            url "$buildDir/repo"
        }
    }
}

The DSL used to declare repositories for publication is the same DSL that is used to declare repositories to consume dependencies from, RepositoryHandler. However, in the context of Maven publication only MavenArtifactRepository repositories can be used for publication.

Performing a publish

The “maven-publish” plugin automatically creates a PublishToMavenRepository task for each MavenPublication and MavenArtifactRepository combination in the publishing.publications and publishing.repositories containers respectively.

The created task is named “publish«PUBNAME»PublicationTo«REPONAME»Repository”.

Example 262. Publishing a project to a Maven repository

build.gradle

apply plugin: 'java'
apply plugin: 'maven-publish'

group = 'org.gradle.sample'
version = '1.0'

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}
publishing {
    repositories {
        maven {
            // change to point to your repo, e.g. http://my.org/repo
            url "$buildDir/repo"
        }
    }
}

Output of gradle publish

> gradle publish
:generatePomFileForMavenJavaPublication
:compileJava
:processResources NO-SOURCE
:classes
:jar
:publishMavenJavaPublicationToMavenRepository
:publish

BUILD SUCCESSFUL in 0s
4 actionable tasks: 4 executed

In this example, a task named “publishMavenJavaPublicationToMavenRepository” is created, which is of type PublishToMavenRepository. This task is wired into the publish lifecycle task. Executing “gradle publish” builds the POM file and all of the artifacts to be published, and transfers them to the repository.

Publishing to Maven Local

For integration with a local Maven installation, it is sometimes useful to publish the module into the local .m2 repository. In Maven parlance, this is referred to as 'installing' the module. The “maven-publish” plugin makes this easy to do by automatically creating a PublishToMavenLocal task for each MavenPublication in the publishing.publications container. Each of these tasks is wired into the publishToMavenLocal lifecycle task. You do not need to have mavenLocal in your publishing.repositories section.

The created task is named “publish«PUBNAME»PublicationToMavenLocal”.

Example 263. Publish a project to the Maven local repository

Output of gradle publishToMavenLocal

> gradle publishToMavenLocal
:generatePomFileForMavenJavaPublication
:compileJava
:processResources NO-SOURCE
:classes
:jar
:publishMavenJavaPublicationToMavenLocal
:publishToMavenLocal

BUILD SUCCESSFUL in 0s
4 actionable tasks: 4 executed

The resulting task in this example is named “publishMavenJavaPublicationToMavenLocal”. This task is wired into the publishToMavenLocal lifecycle task. Executing “gradle publishToMavenLocal” builds the POM file and all of the artifacts to be published, and “installs” them into the local Maven repository.

Generating the POM file without publishing

At times it is useful to generate a Maven POM file for a module without actually publishing. Since POM generation is performed by a separate task, it is very easy to do so.

The task for generating the POM file is of type GenerateMavenPom, and it is given a name based on the name of the publication: “generatePomFileFor«PUBNAME»Publication”. So in the example below, where the publication is named “mavenCustom”, the task will be named “generatePomFileForMavenCustomPublication”.

Example 264. Generate a POM file without publishing

build.gradle

model {
    tasks.generatePomFileForMavenCustomPublication {
        destination = file("$buildDir/generated-pom.xml")
    }
}

Output of gradle generatePomFileForMavenCustomPublication

> gradle generatePomFileForMavenCustomPublication
:generatePomFileForMavenCustomPublication

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

All details of the publishing model are still considered in POM generation, including components, custom artifacts, and any modifications made via pom.withXml.

The “maven-publish” plugin leverages some experimental support for late plugin configuration, and any GenerateMavenPom tasks will not be constructed until the publishing extension is configured. The simplest way to ensure that the publishing plugin is configured when you attempt to access the GenerateMavenPom task is to place the access inside a model block, as the example above demonstrates.

The same applies to any attempt to access publication-specific tasks like PublishToMavenRepository. These tasks should be referenced from within a model block.

The Distribution Plugin

The distribution plugin is currently incubating. Please be aware that the DSL and other configuration may change in later Gradle versions.

The distribution plugin facilitates building archives that serve as distributions of the project. Distribution archives typically contain the executable application and other supporting files, such as documentation.

Usage

To use the distribution plugin, include the following in your build script:

Example 265. Using the distribution plugin

build.gradle

apply plugin: 'distribution'

The plugin adds an extension named “distributions” of type DistributionContainer to the project. It also creates a single distribution named “main” in the distributions container. If your build produces only one distribution, you only need to configure this distribution (or use the defaults).

You can run “gradle distZip” to package the main distribution as a ZIP, or “gradle distTar” to create a TAR file. To build both types of archives just run gradle assembleDist. The files will be created at “$buildDir/distributions/$project.name-$project.version.«ext»”.

You can run “gradle installDist” to assemble the uncompressed distribution into “$buildDir/install/main”.

Tasks

The Distribution plugin adds the following tasks to the project:

Table 30. Distribution plugin - tasks

Task name | Depends on | Type | Description
distZip | - | Zip | Creates a ZIP archive of the distribution contents
distTar | - | Tar | Creates a TAR archive of the distribution contents
assembleDist | distTar, distZip | Task | Creates ZIP and TAR archives with the distribution contents
installDist | - | Sync | Assembles the distribution content and installs it on the current machine

For each extra distribution set you add to the project, the distribution plugin adds the following tasks:

Table 31. Multiple distributions - tasks

Task name | Depends on | Type | Description
${distribution.name}DistZip | - | Zip | Creates a ZIP archive of the distribution contents
${distribution.name}DistTar | - | Tar | Creates a TAR archive of the distribution contents
assemble${distribution.name.capitalize()}Dist | ${distribution.name}DistTar, ${distribution.name}DistZip | Task | Assembles all distribution archives
install${distribution.name.capitalize()}Dist | - | Sync | Assembles the distribution content and installs it on the current machine

Example 266. Adding extra distributions

build.gradle

apply plugin: 'distribution'

version = '1.2'
distributions {
    custom {}
}

This will add the following tasks to the project:

  • customDistZip

  • customDistTar

  • assembleCustomDist

  • installCustomDist

Given that the project name is “myproject” and version “1.2”, running “gradle customDistZip” will produce a ZIP file named “myproject-custom-1.2.zip”.

Running “gradle installCustomDist” will install the distribution contents into “$buildDir/install/custom”.

Distribution contents

All of the files in the “src/$distribution.name/dist” directory will automatically be included in the distribution. You can add additional files by configuring the Distribution object that is part of the container.

Example 267. Configuring the main distribution

build.gradle

apply plugin: 'distribution'

distributions {
    main {
        baseName = 'someName'
        contents {
            from { 'src/readme' }
        }
    }
}

In the example above, the content of the “src/readme” directory will be included in the distribution (along with the files in the “src/main/dist” directory which are added by default).

The “baseName” property has also been changed. This will cause the distribution archives to be created with a different name.

Publishing distributions

The distribution plugin adds the distribution archives as candidates for default publishing artifacts. With the maven plugin applied, the distribution ZIP file will be published when running uploadArchives if no other default artifact is configured.

Example 268. Publishing the main distribution

build.gradle

apply plugin:'maven'

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "file://some/repo")
        }
    }
}

The Announce Plugin

The Gradle announce plugin allows you to send custom announcements during a build. The supported notification systems are listed in the table below.

Usage

To use the announce plugin, apply it to your build script:

Example 269. Applying the announce plugin

build.gradle

apply plugin: 'announce'

Next, configure your notification service(s) of choice (see table below for which configuration properties are available):

Example 270. Configure the announce plugin

build.gradle

announce {  
  username = 'myId'
  password = 'myPassword'
}

Finally, send announcements with the announce method:

Example 271. Using the announce plugin

build.gradle

task helloWorld {
    doLast {
        println "Hello, world!"
    }
}  

helloWorld.doLast {  
    announce.announce("helloWorld completed!", "twitter")
    announce.announce("helloWorld completed!", "local")
}

The announce method takes two String arguments: The message to be sent, and the notification service to be used. The following table lists supported notification services and their configuration properties.

Table 32. Announce Plugin Notification Services

Notification Service  Operating System        Configuration Properties  Further Information
twitter               Any                     username, password        -
snarl                 Windows                 -                         -
growl                 macOS                   -                         -
notify-send           Ubuntu                  -                         Requires the notify-send package to be installed. Use sudo apt-get install libnotify-bin to install it.
local                 Windows, macOS, Ubuntu  -                         Automatically chooses between snarl, growl, and notify-send depending on the current operating system.

Configuration

See the AnnouncePluginExtension class in the API documentation.

The Build Announcements Plugin

The build announcements plugin is currently incubating. Please be aware that the DSL and other configuration may change in later Gradle versions.

The build announcements plugin uses the announce plugin to send local announcements on important events in the build.

Usage

To use the build announcements plugin, include the following in your build script:

Example 272. Using the build announcements plugin

build.gradle

apply plugin: 'build-announcements'

That’s it. If you want to tweak where the announcements go, you can configure the announce plugin to change the local announcer.

You can also apply the plugin from an init script:

Example 273. Using the build announcements plugin from an init script

init.gradle

rootProject {
    apply plugin: 'build-announcements'
}

Dependency management

Introduction to Dependency Management

Dependency management is a critical feature of every build, and Gradle has placed an emphasis on offering first-class dependency management that is both easy to understand and compatible with a wide variety of approaches. If you are familiar with the approach used by either Maven or Ivy you will be delighted to learn that Gradle is fully compatible with both approaches in addition to being flexible enough to support fully-customized approaches.

Here are the major highlights of Gradle’s support for dependency management:

  • Transitive dependency management: Gradle gives you full control of your project’s dependency tree.

  • Support for non-managed dependencies: If your dependencies are simply files in version control or a shared drive, Gradle provides powerful functionality to support this.

  • Support for custom dependency definitions: Gradle’s Module Dependencies give you the ability to describe the dependency hierarchy in the build script.

  • A fully customizable approach to Dependency Resolution: Gradle provides you with the ability to customize resolution rules making dependency substitution easy.

  • Full Compatibility with Maven and Ivy: If you have defined dependencies in a Maven POM or an Ivy file, Gradle provides seamless integration with a range of popular build tools.

  • Integration with existing dependency management infrastructure: Gradle is compatible with both Maven and Ivy repositories. If you use Archiva, Nexus, or Artifactory, Gradle is 100% compatible with all repository formats.

With hundreds of thousands of interdependent open source components each with a range of versions and incompatibilities, dependency management has a habit of causing problems as builds grow in complexity. When a build’s dependency tree becomes unwieldy, your build tool shouldn’t force you to adopt a single, inflexible approach to dependency management. A proper build system has to be designed to be flexible, and Gradle can handle any situation.

Flexible dependency management for migrations

Dependency management can be particularly challenging during a migration from one build system to another. If you are migrating from a tool like Ant or Maven to Gradle, you may be faced with some difficult situations. For example, one common pattern is an Ant project with version-less jar files stored in the filesystem. Other build systems require a wholesale replacement of this approach before migrating. With Gradle, you can adapt your new build to any existing source of dependencies or dependency metadata. This makes incremental migration to Gradle much easier than the alternative. On most large projects, build migrations and any change to the development process are incremental because most organizations can’t afford to stop everything and migrate to a build tool’s idea of dependency management.

Even if your project is using a custom dependency management system or something like an Eclipse .classpath file as master data for dependency management, it is very easy to write a Gradle plugin to use this data in Gradle. For migration purposes this is a common technique with Gradle. (But, once you’ve migrated, it might be a good idea to move away from a .classpath file and use Gradle’s dependency management features directly.)

Dependency management and Java

It is ironic that, in a language known for its rich library of open source components, Java has no concept of libraries or versions. In Java, there is no standard way to tell the JVM that you are using version 3.0.5 of Hibernate, and there is no standard way to say that foo-1.0.jar depends on bar-2.0.jar. This has led to external solutions often based on build tools. The most popular ones at the moment are Maven and Ivy. While Maven provides a complete build system, Ivy focuses solely on dependency management.

Both tools rely on descriptor XML files, which contain information about the dependencies of a particular jar. Both also use repositories where the actual jars are placed together with their descriptor files, and both offer resolution for conflicting jar versions in one form or the other. Both have emerged as standards for solving dependency conflicts. While Gradle originally used Ivy under the hood for its dependency management, it has since replaced this direct dependency on Ivy with a native Gradle dependency resolution engine that supports a range of approaches to dependency resolution, including both POM and Ivy descriptor files.

How dependency resolution works

Gradle takes your dependency declarations and repository definitions and attempts to download all of your dependencies by a process called dependency resolution. Below is a brief outline of how this process works.

  • Given a required dependency, Gradle first attempts to resolve the module for that dependency. Each repository is inspected in order, searching first for a module descriptor file (POM or Ivy file) that indicates the presence of that module. If no module descriptor is found, Gradle will search for the presence of the primary module artifact file indicating that the module exists in the repository.

    • If the dependency is declared as a dynamic version (like 1.+), Gradle will resolve this to the newest available static version (like 1.2) in the repository. For Maven repositories, this is done using the maven-metadata.xml file, while for Ivy repositories this is done by directory listing.

    • If the module descriptor is a POM file that has a parent POM declared, Gradle will recursively attempt to resolve each of the parent modules for the POM.

  • Once each repository has been inspected for the module, Gradle will choose the 'best' one to use. This is done using the following criteria:

    • For a dynamic version, a 'higher' static version is preferred over a 'lower' version.

    • Modules declared by a module descriptor file (Ivy or POM file) are preferred over modules that have an artifact file only.

    • Modules from earlier repositories are preferred over modules in later repositories.

    • When the dependency is declared by a static version and a module descriptor file is found in a repository, there is no need to continue searching later repositories and the remainder of the process is short-circuited.

  • All of the artifacts for the module are then requested from the same repository that was chosen in the process above.

The dependency cache

Gradle contains a highly sophisticated dependency caching mechanism, which seeks to minimise the number of remote requests made in dependency resolution, while striving to guarantee that the results of dependency resolution are correct and reproducible.

The Gradle dependency cache consists of 2 key types of storage:

  • A file-based store of downloaded artifacts, including binaries like jars as well as raw downloaded meta-data like POM files and Ivy files. The storage path for a downloaded artifact includes the SHA1 checksum, meaning that 2 artifacts with the same name but different content can easily be cached.

  • A binary store of resolved module meta-data, including the results of resolving dynamic versions, module descriptors, and artifacts.

Separating the storage of downloaded artifacts from the cache metadata permits us to do some very powerful things with our cache that would be difficult with a transparent, file-only cache layout.

The Gradle cache does not allow the local cache to hide problems or create the mysterious and difficult-to-debug behavior that has been a challenge with many build tools. This behavior is implemented in a bandwidth- and storage-efficient way. In doing so, Gradle enables reliable and reproducible enterprise builds.

Separate metadata cache

Gradle keeps a record of various aspects of dependency resolution in binary format in the metadata cache. The information stored in the metadata cache includes:

  • The result of resolving a dynamic version (e.g. 1.+) to a concrete version (e.g. 1.2).

  • The resolved module metadata for a particular module, including module artifacts and module dependencies.

  • The resolved artifact metadata for a particular artifact, including a pointer to the downloaded artifact file.

  • The absence of a particular module or artifact in a particular repository, eliminating repeated attempts to access a resource that does not exist.

Every entry in the metadata cache includes a record of the repository that provided the information as well as a timestamp that can be used for cache expiry.
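
As a point of reference, you can bypass the cached state on demand from the command line and force Gradle to re-check the repositories (the build task here is just an example):

> gradle --refresh-dependencies build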

Repository caches are independent

As described above, for each repository there is a separate metadata cache. A repository is identified by its URL, type and layout. If a module or artifact has not been previously resolved from this repository, Gradle will attempt to resolve the module against the repository. This will always involve a remote lookup on the repository; however, in many cases no download will be required (see the section called “Artifact reuse”, below).

Dependency resolution will fail if the required artifacts are not available in any repository specified by the build, even if the local cache has a copy of this artifact which was retrieved from a different repository. Repository independence allows builds to be isolated from each other in an advanced way that no build tool has done before. This is a key feature to create builds that are reliable and reproducible in any environment.

Artifact reuse

Before downloading an artifact, Gradle tries to determine the checksum of the required artifact by downloading the sha file associated with that artifact. If the checksum can be retrieved, the artifact is not downloaded again when one with the same id and checksum already exists in the cache. If the checksum cannot be retrieved from the remote server, the artifact will be downloaded (and ignored if it matches an existing artifact).

As well as considering artifacts downloaded from a different repository, Gradle will also attempt to reuse artifacts found in the local Maven Repository. If a candidate artifact has been downloaded by Maven, Gradle will use this artifact if it can be verified to match the checksum declared by the remote server.

Checksum based storage

It is possible for different repositories to provide a different binary artifact in response to the same artifact identifier. This is often the case with Maven SNAPSHOT artifacts, but can also be true for any artifact which is republished without changing its identifier. By caching artifacts based on their SHA1 checksum, Gradle is able to maintain multiple versions of the same artifact. This means that when resolving against one repository Gradle will never overwrite the cached artifact file from a different repository. This is done without requiring a separate artifact file store per repository.

Cache Locking

The Gradle dependency cache uses file-based locking to ensure that it can safely be used by multiple Gradle processes concurrently. The lock is held whenever the binary meta-data store is being read or written, but is released for slow operations such as downloading remote artifacts.

Declaring Dependencies

Gradle builds can declare dependencies on external binaries, raw files and other Gradle projects. You can find examples for common scenarios in this section. For more information, see the full reference on all types of dependencies.

Declaring a binary dependency

Modern software projects rarely build code in isolation. Projects reference external libraries for the purpose of reusing existing and proven functionality, so-called binary dependencies. Upon resolution binary dependencies are downloaded from dedicated repositories and stored in a cache to avoid unnecessary network traffic.

Figure 6. Resolving binary dependencies from remote repositories


Declaring a concrete version of a binary dependency

A typical example for such a library in a Java project is the Spring framework. The following code snippet declares a compile-time dependency on the Spring web module by its coordinates: org.springframework:spring-web:5.0.2.RELEASE. Gradle resolves the dependency including its transitive dependencies from the Maven Central repository and uses it to compile Java source code. The version attribute of the dependency coordinates points to a concrete version indicating that the underlying artifacts don’t change over time. The use of concrete versions ensures reproducibility of dependency resolution.

Example 274. Declaring a binary dependency with a concrete version

build.gradle

apply plugin: 'java-library'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework:spring-web:5.0.2.RELEASE'
}

A Gradle project can define other types of repositories hosting binary dependencies. You can learn more about the syntax and API in the section on declaring repositories. Refer to The Java Plugin for a deep dive on declaring dependencies for a Java project. The resolution behavior for binary dependencies declarations is highly customizable.

Declaring a dynamic version of a binary dependency

Projects might adopt a more aggressive approach for consuming binary dependencies. For example, you might want to always integrate the latest version of a dependency to consume cutting-edge features at any given time. A dynamic version allows for resolving the latest version or the latest version of a version range for a given dependency.

Using dynamic versions in a build bears the risk of potentially breaking it. As soon as a new version of the dependency is released that contains an incompatible API change, your source code might stop compiling.

Example 275. Declaring a binary dependency with a dynamic version

build.gradle

apply plugin: 'java-library'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework:spring-web:5.+'
}

A build scan can effectively visualize dynamic dependency versions and their respective, selected versions.

Figure 7. Dynamic dependencies in build scan


By default, Gradle caches dynamic versions of dependencies for 24 hours. The threshold can be configured as needed, for example if you want to resolve new versions earlier.
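
For example, the following sketch lowers the threshold to 10 minutes for all configurations via ResolutionStrategy.cacheDynamicVersionsFor:

configurations.all {
    // check for new releases of dynamic versions every 10 minutes
    resolutionStrategy.cacheDynamicVersionsFor 10, 'minutes'
}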

Declaring a changing version of a binary dependency

A team might decide to implement a series of features before releasing a new version of the application or library. A common strategy to allow consumers to integrate an unfinished version of their artifacts early and often is to release a so-called changing version. A changing version indicates that the feature set is still under active development and that a stable version for general availability hasn’t been released yet.

In Maven repositories, changing versions are commonly referred to as snapshot versions. Snapshot versions contain the suffix -SNAPSHOT. The following example demonstrates how to declare a snapshot version on the Spring dependency.

Example 276. Declaring a binary dependency with a changing version

build.gradle

apply plugin: 'java-library'

repositories {
    mavenCentral()
    maven {
        url 'https://repo.spring.io/snapshot/'
    }
}

dependencies {
    implementation 'org.springframework:spring-web:5.0.3.BUILD-SNAPSHOT'
}

By default, Gradle caches changing versions of dependencies for 24 hours. The threshold can be configured as needed, for example if you want to resolve new snapshot versions earlier.
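
For example, the following sketch lowers the threshold to 4 hours via ResolutionStrategy.cacheChangingModulesFor:

configurations.all {
    // check for new snapshots of changing modules every 4 hours
    resolutionStrategy.cacheChangingModulesFor 4, 'hours'
}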

Gradle is flexible enough to treat any version as a changing version. All you need to do is set the property ExternalModuleDependency.setChanging(boolean) to true.
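
A minimal sketch, using hypothetical coordinates (org.example:some-lib:1.1) for illustration:

dependencies {
    // treat this version as changing even though it has no -SNAPSHOT suffix
    implementation('org.example:some-lib:1.1') {
        changing = true
    }
}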

Declaring a file dependency

Projects sometimes do not rely on a binary repository product, e.g. JFrog Artifactory or Sonatype Nexus, for hosting and resolving external dependencies. It’s common practice to host those dependencies on a shared drive or check them into version control alongside the project source code. Those dependencies are referred to as file dependencies because they represent files without any metadata (like information about transitive dependencies, the origin or its author) attached to them.

Figure 8. Resolving file dependencies from the local file system and a shared drive


The following example resolves file dependencies from the directories ant, libs and tools.

Example 277. Declaring multiple file dependencies

build.gradle

configurations {
    antContrib
    externalLibs
    deploymentTools
}

dependencies {
    antContrib files('ant/antcontrib.jar')
    externalLibs files('libs/commons-lang.jar', 'libs/log4j.jar')
    deploymentTools fileTree(dir: 'tools', include: '*.exe')
}

As you can see in the code example, every dependency has to define its exact location in the file system. The most prominent methods for creating a file reference are Project.files(java.lang.Object[]) and Project.fileTree(java.lang.Object). Alternatively, you can also define the source directory of one or many file dependencies in the form of a flat directory repository.
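
For instance, a flat directory repository pointing at a local libs directory might look like the following sketch (the directory name is an assumption):

repositories {
    flatDir {
        // resolve artifacts by file name from this directory
        dirs 'libs'
    }
}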

Declaring a project dependency

Software projects often break up software components into modules to improve maintainability and prevent strong coupling. Modules can define dependencies between each other to reuse code within the same project.

Gradle can model dependencies between modules. Those dependencies are called project dependencies because each module is represented by a Gradle project. At runtime, the build automatically ensures that project dependencies are built in the correct order and added to the classpath for compilation. The chapter Authoring Multi-Project Builds discusses how to set up and configure multi-project builds in more detail.

Figure 9. Dependencies between projects


The following example declares dependencies on the utils and api projects from the web-service project. The method Project.project(java.lang.String) creates a reference to a specific subproject by path.

Example 278. Declaring project dependencies

build.gradle

project(':web-service') {
    dependencies {
        implementation project(':utils')
        implementation project(':api')
    }
}

Defining the scope of a dependency with configurations

What is a configuration?

Every dependency declared for a Gradle project applies to a specific scope. For example some dependencies should be used for compiling source code whereas others only need to be available at runtime. Gradle represents the scope of a dependency with the help of a Configuration.

Many Gradle plugins add pre-defined configurations to your project. The Java plugin, for example, adds configurations to represent the various classpaths it needs for source code compilation, executing tests and the like. See the Java plugin chapter for an example. The sections above demonstrate how to declare dependencies for different use cases.

Figure 10. Configurations use declared dependencies for specific purposes


Defining custom configurations

You can also define configurations yourself, so-called custom configurations. A custom configuration is useful for separating the scope of dependencies needed for a dedicated purpose.

Let’s say you wanted to declare a dependency on the Jasper Ant task for the purpose of pre-compiling JSP files that should not end up in the classpath for compiling your source code. It’s fairly simple to achieve that goal by introducing a custom configuration and using it in a task.

Example 279. Declaring and using a custom configuration

build.gradle

configurations {
    jasper
}

repositories {
    mavenCentral()
}

dependencies {
    jasper 'org.apache.tomcat.embed:tomcat-embed-jasper:9.0.2'
}

task preCompileJsps {
    doLast {
        ant.taskdef(classname: 'org.apache.jasper.JspC',
                    name: 'jasper',
                    classpath: configurations.jasper.asPath)
        ant.jasper(validateXml: false,
                   uriroot: file('src/main/webapp'),
                   outputDir: file("$buildDir/compiled-jsps"))
    }
}

A project’s configurations are managed by a configurations object. Configurations have a name and can extend each other. To learn more about this API have a look at ConfigurationContainer.
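
As a brief sketch of extension, a hypothetical smokeTest configuration could inherit all dependencies declared on the Java plugin’s implementation configuration (this assumes the java or java-library plugin is applied):

configurations {
    // smokeTest sees every dependency declared on implementation
    smokeTest {
        extendsFrom implementation
    }
}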

Resolving specific artifacts from a module dependency

Whenever Gradle tries to resolve a dependency from a Maven or Ivy repository, it looks for a metadata file and the default artifact file, a JAR. The build fails if none of these artifact files can be resolved. Under certain conditions, you might want to tweak the way Gradle resolves artifacts for a dependency.

  • The dependency only provides a non-standard artifact without any metadata e.g. a ZIP file.

  • The dependency metadata declares more than one artifact e.g. as part of an Ivy dependency descriptor.

  • You only want to download a specific artifact without any of the transitive dependencies declared in the metadata.

Gradle is a polyglot build tool and not limited to just resolving Java libraries. Let’s assume you wanted to build a web application using JavaScript as the client technology. Most projects check external JavaScript libraries into version control. An external JavaScript library is no different than a reusable Java library, so why not download it from a repository instead?

Google Hosted Libraries is a distribution platform for popular, open-source JavaScript libraries. With the help of the artifact-only notation you can download a JavaScript library file e.g. JQuery. The @ character separates the dependency’s coordinates from the artifact’s file extension.

Example 280. Resolving a JavaScript artifact for a declared dependency

build.gradle

repositories {
    ivy {
        url 'https://ajax.googleapis.com/ajax/libs'
        layout 'pattern', {
            artifact '[organization]/[revision]/[module].[ext]'
        }
    }
}

configurations {
    js
}

dependencies {
    js 'jquery:jquery:3.2.1@js'
}

Some dependencies ship different "flavors" of the same artifact or they publish multiple artifacts that belong to a specific version of the dependency but have a different purpose. It’s common for a Java library to publish the artifact with the compiled class files, another one with just the source code in it and a third one containing the Javadocs.

In JavaScript, a library may exist as an uncompressed or a minified artifact. In Gradle, a specific artifact identifier is called a classifier, a term generally used in Maven and Ivy dependency management.

Let’s say we wanted to download the minified artifact of the JQuery library instead of the uncompressed file. You can provide the classifier min as part of the dependency declaration.

Example 281. Resolving a JavaScript artifact with classifier for a declared dependency

build.gradle

repositories {
    ivy {
        url 'https://ajax.googleapis.com/ajax/libs'
        layout 'pattern', {
            artifact '[organization]/[revision]/[module](.[classifier]).[ext]'
        }
    }
}

configurations {
    js
}

dependencies {
    js 'jquery:jquery:3.2.1:min@js'
}

Declaring Repositories

Gradle can resolve external dependencies from one or many repositories based on Maven, Ivy or flat directory formats. Check out the full reference on all types of repositories for more information.

Declaring a publicly-available repository

Organizations building software may want to leverage public binary repositories to download and consume open source dependencies. Popular public repositories include Maven Central, Bintray JCenter and the Google Android repository. Gradle provides built-in shortcut methods for the most widely-used repositories.

Figure 11. Declaring a repository with the help of shortcut methods


To declare JCenter as repository, add this code to your build script:

Example 282. Declaring JCenter repository as source for resolving dependencies

build.gradle

repositories {
    jcenter()
}

Under the covers Gradle resolves dependencies from the respective URL of the public repository defined by the shortcut method. All shortcut methods are available via the RepositoryHandler API. Alternatively, you can spell out the URL of the repository for more fine-grained control.

Declaring a custom repository by URL

Most enterprise projects set up a binary repository available only within an intranet. In-house repositories enable teams to publish internal binaries, set up user management and security measures, and ensure uptime and availability. Specifying a custom URL is also helpful if you want to declare a less popular, but publicly-available repository.

Add the following code to declare an in-house repository for your build reachable through a custom URL.

Example 283. Declaring a custom repository by URL

build.gradle

repositories {
    maven {
        url "http://repo.mycompany.com/maven2"
    }
}

Repositories with custom URLs can be specified as Maven or Ivy repositories by calling the corresponding methods available on the RepositoryHandler API. Gradle supports protocols other than http or https as part of the custom URL, e.g. file, sftp or s3. For full coverage see the reference manual on supported transport protocols.
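
For example, a Maven repository reachable through the file protocol could be declared as follows (the path is a placeholder):

repositories {
    maven {
        url "file:///some/local/repo"
    }
}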

Declaring multiple repositories

You can define more than one repository for resolving dependencies. Declaring multiple repositories is helpful if some dependencies are only available in one repository but not the other. You can mix any type of repository described in the reference section.

This example demonstrates how to declare various shortcut and custom URL repositories for a project:

Example 284. Declaring multiple repositories

build.gradle

repositories {
    jcenter()
    maven {
        url "https://maven.springframework.org/release"
    }
    maven {
        url "https://maven.restlet.org"
    }
}

The order of declaration determines how Gradle will check for dependencies at runtime. If Gradle finds a module descriptor in a particular repository, it will attempt to download all of the artifacts for that module from the same repository. You can learn more about Gradle’s resolution mechanism in the dedicated section.

Inspecting Dependencies

Gradle provides sufficient tooling to navigate large dependency graphs and mitigate situations that can lead to dependency hell. Users can choose to render the full graph of dependencies as well as identify the selection reason and origin for a dependency. The origin of a dependency can be a declared dependency in the build script or a transitive dependency in the graph, plus the corresponding configuration. Gradle offers both capabilities through visual representation via build scans and as command line tooling.

Listing dependencies in a project

A project can declare one or more dependencies. Gradle can visualize the whole dependency tree for every configuration available in the project.

Rendering the dependency tree is particularly useful if you’d like to identify which dependencies have been resolved at runtime. It also provides you with information about any dependency conflict resolution that occurred in the process and clearly indicates the selected version. The dependency report always contains declared and transitive dependencies.

Let’s say you want to create tasks for your project that use the JGit library to execute SCM operations, e.g. to model a release process. You can declare dependencies for any external tooling with the help of a custom configuration so that it doesn’t pollute other contexts like the compilation classpath for your production source code.

Example 285. Declaring the JGit dependency with a custom configuration

build.gradle

repositories {
    jcenter()
}

configurations {
    scm
}

dependencies {
    scm 'org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r'
}

A build scan can visualize dependencies as a navigable, searchable tree. Additional context information can be rendered by clicking on a specific dependency in the graph.

Figure 12. Dependency tree in a build scan


Every Gradle project provides the task dependencies to render the so-called dependency report from the command line. By default the dependency report renders dependencies for all configurations. To pare down the information, provide the optional parameter --configuration.

Example 286. Rendering the dependency report for a custom configuration

Output of gradle -q dependencies --configuration scm

> gradle -q dependencies --configuration scm

------------------------------------------------------------
Root project
------------------------------------------------------------

scm
\--- org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r
     +--- com.jcraft:jsch:0.1.54
     +--- com.googlecode.javaewah:JavaEWAH:1.1.6
     +--- org.apache.httpcomponents:httpclient:4.3.6
     |    +--- org.apache.httpcomponents:httpcore:4.3.3
     |    +--- commons-logging:commons-logging:1.1.3
     |    \--- commons-codec:commons-codec:1.6
     \--- org.slf4j:slf4j-api:1.7.2

The dependencies report provides detailed information about the dependencies available in the graph. Any dependency that could not be resolved is marked with FAILED in red color. Dependencies with the same coordinates that can occur multiple times in the graph are omitted and indicated by an asterisk. Dependencies that had to undergo conflict resolution render the requested and selected version separated by a right arrow character.

Identifying which dependency version was selected and why

Large software projects inevitably deal with an increased number of dependencies either through direct or transitive dependencies. The dependencies report provides you with the raw list of dependencies but does not explain why they have been selected or which dependency is responsible for pulling them into the graph.

Let’s have a look at a concrete example. A project may request two different versions of the same dependency either as direct or transitive dependency. Gradle applies version conflict resolution to ensure that only one version of the dependency exists in the dependency graph. In this example the conflicting dependency is represented by commons-codec:commons-codec.

Example 287. Declaring the JGit dependency and a conflicting dependency

build.gradle

repositories {
    jcenter()
}

configurations {
    scm
}

dependencies {
    scm 'org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r'
    scm 'commons-codec:commons-codec:1.7'
}

The dependency tree in a build scan renders the selection reason (conflict resolution) as well as the origin of a dependency if you click on a dependency and select the "Required By" tab.

Figure 13. Dependency insight capabilities in a build scan


Every Gradle project provides the task dependencyInsight to render the so-called dependency insight report from the command line. Given a dependency in the dependency graph you can identify the selection reason and track down the origin of the dependency selection. You can think of the dependency insight report as the inverse representation of the dependency report for a given dependency. When executing the task you have to provide the mandatory parameter --dependency to specify the coordinates of the dependency under inspection. The parameter --configuration is optional but helps with filtering the output.

Example 288. Using the dependency insight report for a given dependency

Output of gradle -q dependencyInsight --dependency commons-codec --configuration scm

> gradle -q dependencyInsight --dependency commons-codec --configuration scm
commons-codec:commons-codec:1.7 (conflict resolution)
\--- scm

commons-codec:commons-codec:1.6 -> 1.7
\--- org.apache.httpcomponents:httpclient:4.3.6
     \--- org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r
          \--- scm

Managing Transitive Dependencies

Resolution behavior for transitive dependencies can be customized to a high degree to meet enterprise requirements.

Excluding transitive module dependencies

Declared dependencies in a build script can pull in a lot of transitive dependencies. You might decide that you do not want a particular transitive dependency as part of the dependency graph for a good reason.

  • The dependency is undesired due to licensing constraints.

  • The dependency is not available in any of the declared repositories.

  • The metadata for the dependency exists but the artifact does not.

  • The metadata provides incorrect coordinates for a transitive dependency.

Transitive dependencies can be excluded on the level of a declared dependency or a configuration. Let’s demonstrate both use cases. In the following two examples the build script declares a dependency on Log4J, a popular logging framework in the Java world. The metadata of the particular version of Log4J also defines transitive dependencies.

Example 289. Unresolved artifacts for transitive dependencies

build.gradle

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'log4j:log4j:1.2.15'
}

If resolved from Maven Central, some of the transitive dependencies provide metadata but not the corresponding binary artifact. As a result, any task requiring the binary files will fail, e.g. a compilation task.

> gradle -q compileJava

* What went wrong:
Could not resolve all files for configuration ':compileClasspath'.
> Could not find jms.jar (javax.jms:jms:1.1).
  Searched in the following locations:
      https://repo.maven.apache.org/maven2/javax/jms/jms/1.1/jms-1.1.jar
> Could not find jmxtools.jar (com.sun.jdmk:jmxtools:1.2.1).
  Searched in the following locations:
      https://repo.maven.apache.org/maven2/com/sun/jdmk/jmxtools/1.2.1/jmxtools-1.2.1.jar
> Could not find jmxri.jar (com.sun.jmx:jmxri:1.2.1).
  Searched in the following locations:
      https://repo.maven.apache.org/maven2/com/sun/jmx/jmxri/1.2.1/jmxri-1.2.1.jar

The situation can be fixed by adding a repository containing those dependencies. In the given example project, the source code does not actually use any of Log4J’s functionality that requires the JMS (e.g. JMSAppender) or JMX libraries. It’s safe to exclude them from the dependency declaration.

Exclusions need to be spelled out as a key/value pair via the attributes group and/or module. For more information, refer to ModuleDependency.exclude(java.util.Map).

Example 290. Excluding transitive dependency for a particular dependency declaration

build.gradle

dependencies {
    implementation('log4j:log4j:1.2.15') {
        exclude group: 'javax.jms', module: 'jms'
        exclude group: 'com.sun.jdmk', module: 'jmxtools'
        exclude group: 'com.sun.jmx', module: 'jmxri'
    }
}

You may find that other dependencies will want to pull in the same transitive dependency that is missing its artifacts. Alternatively, you can exclude the transitive dependencies for a particular configuration by calling the method Configuration.exclude(java.util.Map).

Example 291. Excluding transitive dependency for a particular configuration

build.gradle

configurations {
    implementation {
        exclude group: 'javax.jms', module: 'jms'
        exclude group: 'com.sun.jdmk', module: 'jmxtools'
        exclude group: 'com.sun.jmx', module: 'jmxri'
    }
}

dependencies {
    implementation 'log4j:log4j:1.2.15'
}

As a build script author you often know that you want to exclude a dependency for all configurations available in the project. You can use the method DomainObjectCollection.all(org.gradle.api.Action) to define a global rule, as sketched below.
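
Applied to the Log4J example above, a global rule might look like this sketch:

configurations.all {
    // exclude the broken transitive dependencies from every configuration
    exclude group: 'javax.jms', module: 'jms'
    exclude group: 'com.sun.jdmk', module: 'jmxtools'
    exclude group: 'com.sun.jmx', module: 'jmxri'
}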

You might encounter other use cases that don’t quite fit the bill of an exclude rule. For example, you might want to automatically select a version for a dependency with a specific requested version, or you might want to select a different group for a requested dependency to react to a relocation. Those use cases are better solved by the ResolutionStrategy API. Some of these use cases are covered in Customizing Dependency Resolution Behavior.

Enforcing a particular dependency version

Gradle resolves any dependency version conflicts by selecting the latest version found in the dependency graph. Some projects might need to deviate from the default behavior and enforce an earlier version of a dependency, e.g. if the source code of the project depends on an older API of a dependency than some of the external libraries.

Enforcing a version of a dependency requires a conscious decision. Changing the version of a transitive dependency might lead to runtime errors if external libraries do not function properly with it. Consider upgrading your source code to use a newer version of the library as an alternative approach.

Let’s say a project uses the HttpClient library for performing HTTP calls. HttpClient pulls in Commons Codec as transitive dependency with version 1.10. However, the production source code of the project requires an API from Commons Codec 1.9 which is not available in 1.10 anymore. A dependency version can be enforced by declaring it in the build script and setting ExternalDependency.setForce(boolean) to true.

Example 292. Enforcing a dependency version

build.gradle

dependencies {
    implementation 'org.apache.httpcomponents:httpclient:4.5.4'
    implementation('commons-codec:commons-codec:1.9') {
        force = true
    }
}

If the project requires a specific version of a dependency on a configuration-level then it can be achieved by calling the method ResolutionStrategy.force(java.lang.Object[]).

Example 293. Enforcing a dependency version on the configuration-level

build.gradle

configurations {
    compileClasspath {
        resolutionStrategy.force 'commons-codec:commons-codec:1.9'
    }
}

dependencies {
    implementation 'org.apache.httpcomponents:httpclient:4.5.4'
}

Disabling resolution of transitive dependencies

By default Gradle resolves all transitive dependencies specified by the dependency metadata. Sometimes this behavior may not be desirable, e.g. if the metadata is incorrect or defines a large graph of transitive dependencies. You can tell Gradle to disable transitive dependency management for a dependency by setting ModuleDependency.setTransitive(boolean) to false. As a result only the main artifact will be resolved for the declared dependency.

Example 294. Disabling transitive dependency resolution for a declared dependency

build.gradle

dependencies {
    implementation('com.google.guava:guava:23.0') {
        transitive = false
    }
}

Disabling transitive dependency resolution will likely require you to declare the necessary runtime dependencies in your build script which otherwise would have been resolved automatically, as sketched below. Not doing so might lead to runtime classpath issues.
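
A sketch of that idea, re-declaring one transitive dependency explicitly (the jsr305 coordinates are an illustrative assumption, not a complete list of Guava’s dependencies):

dependencies {
    implementation('com.google.guava:guava:23.0') {
        transitive = false
    }
    // explicitly declare a transitive dependency that is now missing
    implementation 'com.google.code.findbugs:jsr305:1.3.9'
}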

A project can decide to disable transitive dependency resolution completely, either because you don’t want to rely on the metadata published to the consumed repositories or because you want to gain full control over the dependencies in your graph. For more information, see Configuration.setTransitive(boolean).

Example 295. Disabling transitive dependency resolution on the configuration-level

build.gradle

configurations.all {
    transitive = false
}

dependencies {
    implementation 'com.google.guava:guava:23.0'
}

Working with Dependencies

For the examples below we have the following dependencies setup:

Example 296. Dependency setup for the following examples

build.gradle

configurations {
    sealife
    alllife
}

dependencies {
    sealife "sea.mammals:orca:1.0", "sea.fish:shark:1.0", "sea.fish:tuna:1.0"
    alllife configurations.sealife
    alllife "air.birds:albatross:1.0"
}

The dependencies have the following transitive dependencies:

shark-1.0 -> seal-2.0, tuna-1.0

orca-1.0 -> seal-1.0

tuna-1.0 -> herring-1.0

You can use the configuration to access the declared dependencies or a subset of those:

Example 297. Accessing declared dependencies

build.gradle

task dependencies {
    doLast {
        configurations.alllife.dependencies.each { dep -> println dep.name }
        println()
        configurations.alllife.allDependencies.each { dep -> println dep.name }
        println()
        configurations.alllife.allDependencies.findAll { dep -> dep.name != 'orca' }
            .each { dep -> println dep.name }
    }
}

Output of gradle -q dependencies

> gradle -q dependencies
albatross

albatross
orca
shark
tuna

albatross
shark
tuna

The dependencies property returns only the dependencies belonging explicitly to the configuration. The allDependencies property also includes the dependencies from extended configurations.

To get the library files of the configuration dependencies you can do:

Example 298. Configuration.files

build.gradle

task allFiles {
    doLast {
        configurations.sealife.files.each { file ->
            println file.name
        }
    }
}

Output of gradle -q allFiles

> gradle -q allFiles
orca-1.0.jar
shark-1.0.jar
tuna-1.0.jar
seal-2.0.jar
herring-1.0.jar

Sometimes you want the library files of a subset of the configuration dependencies (e.g. of a single dependency).

Example 299. Configuration.files with spec

build.gradle

task files {
    doLast {
        configurations.sealife.files { dep -> dep.name == 'orca' }.each { file ->
            println file.name
        }
    }
}

Output of gradle -q files

> gradle -q files
orca-1.0.jar
seal-2.0.jar

The Configuration.files method always retrieves all artifacts of the whole configuration. It then filters the retrieved files by specified dependencies. As you can see in the example, transitive dependencies are included.

You can also copy a configuration. You can optionally specify that only a subset of dependencies from the original configuration should be copied. The copying methods come in two flavors. The copy method copies only the dependencies belonging explicitly to the configuration. The copyRecursive method copies all the dependencies, including the dependencies from extended configurations.

Example 300. Configuration.copy

build.gradle

task copy {
    doLast {
        configurations.alllife.copyRecursive { dep -> dep.name != 'orca' }
            .allDependencies.each { dep -> println dep.name }
        println()
        configurations.alllife.copy().allDependencies
            .each { dep -> println dep.name }
    }
}

Output of gradle -q copy

> gradle -q copy
albatross
shark
tuna

albatross

It is important to note that the returned files of the copied configuration are often, but not always, the same as the returned files of the dependency subset of the original configuration. In case of version conflicts between dependencies of the subset and dependencies not belonging to the subset, the resolution result might be different.

Example 301. Configuration.copy vs. Configuration.files

build.gradle

task copyVsFiles {
    doLast {
        configurations.sealife.copyRecursive { dep -> dep.name == 'orca' }
            .each { file -> println file.name }
        println()
        configurations.sealife.files { dep -> dep.name == 'orca' }
            .each { file -> println file.name }
    }
}

Output of gradle -q copyVsFiles

> gradle -q copyVsFiles
orca-1.0.jar
seal-1.0.jar

orca-1.0.jar
seal-2.0.jar

In the example above, orca has a dependency on seal-1.0 whereas shark has a dependency on seal-2.0. The original configuration therefore has a version conflict which is resolved to the newer seal-2.0 version. The files method therefore returns seal-2.0 as a transitive dependency of orca. The copied configuration only has orca as a dependency, so there is no version conflict and seal-1.0 is returned as a transitive dependency.

Once a configuration is resolved it is immutable. Changing its state or the state of one of its dependencies will cause an exception. You can always copy a resolved configuration. The copied configuration is in the unresolved state and can be freshly resolved.
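
A short sketch of this behavior, reusing the sealife configuration from the examples above:

task copyAfterResolve {
    doLast {
        configurations.sealife.resolve()           // resolving makes the configuration immutable
        def fresh = configurations.sealife.copy()  // the copy is in the unresolved state...
        fresh.dependencies.removeAll { it.name == 'tuna' } // ...so it can still be modified
        fresh.files.each { file -> println file.name }
    }
}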

To learn more about the API of the configuration class see the API documentation: Configuration.

Customizing Dependency Resolution Behavior

In most cases, Gradle’s default dependency management will resolve the dependencies that you want in your build. In some cases, however, it can be necessary to tweak dependency resolution to ensure that your build receives exactly the right dependencies.

There are a number of ways that you can influence how Gradle resolves dependencies.

Using dependency resolve rules

A dependency resolve rule is executed for each resolved dependency, and offers a powerful API for manipulating a requested dependency prior to that dependency being resolved. This feature is incubating, but currently offers the ability to change the group, name and/or version of a requested dependency, allowing a dependency to be substituted with a completely different module during resolution.

Dependency resolve rules provide a very powerful way to control the dependency resolution process, and can be used to implement all sorts of advanced patterns in dependency management. Some of these patterns are outlined below. For more information and code samples see the ResolutionStrategy class in the API documentation.

Modelling releasable units

Often an organisation publishes a set of libraries with a single version, where the libraries are built, tested and published together. These libraries form a 'releasable unit', designed and intended to be used as a whole. It does not make sense to use libraries from different releasable units together.

But it is easy for transitive dependency resolution to violate this contract. For example:

  • module-a depends on releasable-unit:part-one:1.0

  • module-b depends on releasable-unit:part-two:1.1

A build depending on both module-a and module-b will obtain different versions of libraries within the releasable unit.

Dependency resolve rules give you the power to enforce releasable units in your build. Imagine a releasable unit defined by all libraries that have 'org.gradle' group. We can force all of these libraries to use a consistent version:

Example 302. Forcing consistent version for a group of libraries

build.gradle

configurations.all {
    resolutionStrategy.eachDependency { DependencyResolveDetails details ->
        if (details.requested.group == 'org.gradle') {
            details.useVersion '1.4'
        }
    }
}

Implementing a custom versioning scheme

In some corporate environments, the list of module versions that can be declared in Gradle builds is maintained and audited externally. Dependency resolve rules provide a neat implementation of this pattern:

  • In the build script, the developer declares dependencies with the module group and name, but uses a placeholder version, for example: 'default'.

  • The 'default' version is resolved to a specific version via a dependency resolve rule, which looks up the version in a corporate catalog of approved modules.

This rule implementation can be neatly encapsulated in a corporate plugin, and shared across all builds within the organisation.

Example 303. Using a custom versioning scheme

build.gradle

configurations.all {
    resolutionStrategy.eachDependency { DependencyResolveDetails details ->
        if (details.requested.version == 'default') {
            def version = findDefaultVersionInCatalog(details.requested.group, details.requested.name)
            details.useVersion version
        }
    }
}

def findDefaultVersionInCatalog(String group, String name) {
    //some custom logic that resolves the default version into a specific version
    "1.0"
}

Blacklisting a particular version with a replacement

Dependency resolve rules provide a mechanism for blacklisting a particular version of a dependency and providing a replacement version. This can be useful if a certain dependency version is broken and should not be used, where a dependency resolve rule causes this version to be replaced with a known good version. One example of a broken module is one that declares a dependency on a library that cannot be found in any of the public repositories, but there are many other reasons why a particular module version is unwanted and a different version is preferred.

In the example below, imagine that version 1.2.1 contains important fixes and should always be used in preference to 1.2. The rule provided will enforce just this: any time version 1.2 is encountered it will be replaced with 1.2.1. Note that this is different from a forced version as described above, in that any other versions of this module would not be affected. This means that the 'newest' conflict resolution strategy would still select version 1.3 if this version was also pulled transitively.

Example 304. Blacklisting a version with a replacement

build.gradle

configurations.all {
    resolutionStrategy.eachDependency { DependencyResolveDetails details ->
        if (details.requested.group == 'org.software' && details.requested.name == 'some-library' && details.requested.version == '1.2') {
            //prefer different version which contains some necessary fixes
            details.useVersion '1.2.1'
        }
    }
}

Substituting a dependency module with a compatible replacement

At times a completely different module can serve as a replacement for a requested module dependency. Examples include using 'groovy' in place of 'groovy-all', or using 'log4j-over-slf4j' instead of 'log4j'. Starting with Gradle 1.5 you can make these substitutions using dependency resolve rules:

Example 305. Changing dependency group and/or name at the resolution

build.gradle

configurations.all {
    resolutionStrategy.eachDependency { DependencyResolveDetails details ->
        if (details.requested.name == 'groovy-all') {
            //prefer 'groovy' over 'groovy-all':
            details.useTarget group: details.requested.group, name: 'groovy', version: details.requested.version
        }
        if (details.requested.name == 'log4j') {
            //prefer 'log4j-over-slf4j' over 'log4j', with fixed version:
            details.useTarget "org.slf4j:log4j-over-slf4j:1.7.10"
        }
    }
}

Dependency Substitution Rules

Dependency substitution rules work similarly to dependency resolve rules. In fact, many capabilities of dependency resolve rules can be implemented with dependency substitution rules. They allow project and module dependencies to be transparently substituted with specified replacements. Unlike dependency resolve rules, dependency substitution rules allow project and module dependencies to be substituted interchangeably.

Adding a dependency substitution rule to a configuration changes the timing of when that configuration is resolved. Instead of being resolved on first use, the configuration is instead resolved when the task graph is being constructed. This can have unexpected consequences if the configuration is being further modified during task execution, or if the configuration relies on modules that are published during execution of another task.

To explain:

  • A Configuration can be declared as an input to any Task, and that configuration can include project dependencies when it is resolved.

  • If a project dependency is an input to a Task (via a configuration), then tasks to build the project artifacts must be added to the task dependencies.

  • In order to determine the project dependencies that are inputs to a task, Gradle needs to resolve the Configuration inputs.

  • Because the Gradle task graph is fixed once task execution has commenced, Gradle needs to perform this resolution prior to executing any tasks.

In the absence of dependency substitution rules, Gradle knows that an external module dependency will never transitively reference a project dependency. This makes it easy to determine the full set of project dependencies for a configuration through simple graph traversal. With dependency substitution rules in place, Gradle can no longer make this assumption, and must perform a full resolve in order to determine the project dependencies.

Substituting an external module dependency with a project dependency

One use case for dependency substitution is to use a locally developed version of a module in place of one that is downloaded from an external repository. This could be useful for testing a local, patched version of a dependency.

The module to be replaced can be declared with or without a version specified.

Example 306. Substituting a module with a project

build.gradle

configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute module("org.utils:api") with project(":api")
        substitute module("org.utils:util:2.5") with project(":util")
    }
}

Note that a project that is substituted must be included in the multi-project build (via settings.gradle). Dependency substitution rules take care of replacing the module dependency with the project dependency and wiring up any task dependencies, but do not implicitly include the project in the build.

Substituting a project dependency with a module replacement

Another way to use substitution rules is to replace a project dependency with a module in a multi-project build. This can be useful to speed up development with a large multi-project build, by allowing a subset of the project dependencies to be downloaded from a repository rather than being built.

The module to be used as a replacement must be declared with a version specified.

Example 307. Substituting a project with a module

build.gradle

configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute project(":api") with module("org.utils:api:1.3")
    }
}

When a project dependency has been replaced with a module dependency, that project is still included in the overall multi-project build. However, tasks to build the replaced dependency will not be executed in order to resolve the depending Configuration.

Conditionally substituting a dependency

A common use case for dependency substitution is to allow more flexible assembly of sub-projects within a multi-project build. This can be useful for developing a local, patched version of an external dependency or for building a subset of the modules within a large multi-project build.

The following example uses a dependency substitution rule to replace any module dependency with the group "org.example", but only if a local project matching the dependency name can be located.

Example 308. Conditionally substituting a dependency

build.gradle

configurations.all {
    resolutionStrategy.dependencySubstitution.all { DependencySubstitution dependency ->
        if (dependency.requested instanceof ModuleComponentSelector && dependency.requested.group == "org.example") {
            def targetProject = findProject(":${dependency.requested.module}")
            if (targetProject != null) {
                dependency.useTarget targetProject
            }
        }
    }
}

Note that a project that is substituted must be included in the multi-project build (via settings.gradle). Dependency substitution rules take care of replacing the module dependency with the project dependency, but do not implicitly include the project in the build.

Specifying default dependencies for a configuration

A configuration can be configured with default dependencies to be used if no dependencies are explicitly set for the configuration. A primary use case of this functionality is for developing plugins that make use of versioned tools that the user might override. By specifying default dependencies, the plugin can use a default version of the tool only if the user has not specified a particular version to use.

Example 309. Specifying default dependencies on a configuration

build.gradle

configurations {
    pluginTool {
        defaultDependencies { dependencies ->
            dependencies.add(this.project.dependencies.create("org.gradle:my-util:1.0"))
        }
    }
}
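
A build script that uses the plugin can then override the default simply by declaring an explicit dependency on the configuration; in that case the defaultDependencies action is never applied. A minimal sketch, assuming the pluginTool configuration from the example above (the version shown is illustrative):

build.gradle

dependencies {
    pluginTool 'org.gradle:my-util:1.2'
}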

Enabling Ivy dynamic resolve mode

Gradle’s Ivy repository implementations support the equivalent to Ivy’s dynamic resolve mode. Normally, Gradle will use the rev attribute for each dependency definition included in an ivy.xml file. In dynamic resolve mode, Gradle will instead prefer the revConstraint attribute over the rev attribute for a given dependency definition. If the revConstraint attribute is not present, the rev attribute is used instead.

To enable dynamic resolve mode, you need to set the appropriate option on the repository definition. A couple of examples are shown below. Note that dynamic resolve mode is only available for Gradle’s Ivy repositories. It is not available for Maven repositories, or custom Ivy DependencyResolver implementations.

Example 310. Enabling dynamic resolve mode

build.gradle

// Can enable dynamic resolve mode when you define the repository
repositories {
    ivy {
        url "http://repo.mycompany.com/repo"
        resolve.dynamicMode = true
    }
}

// Can use a rule instead to enable (or disable) dynamic resolve mode for all repositories
repositories.withType(IvyArtifactRepository) {
    resolve.dynamicMode = true
}

Component metadata rules

Each module (also called component) has metadata associated with it, such as its group, name, version, dependencies, and so on. This metadata typically originates in the module’s descriptor. Metadata rules allow certain parts of a module’s metadata to be manipulated from within the build script. They take effect after a module’s descriptor has been downloaded, but before it has been selected among all candidate versions. This makes metadata rules another instrument for customizing dependency resolution.

One piece of module metadata that Gradle understands is a module’s status scheme. This concept, also known from Ivy, models the different levels of maturity that a module transitions through over time. The default status scheme, ordered from least to most mature status, is integration, milestone, release. Apart from a status scheme, a module also has a (current) status, which must be one of the values in its status scheme. If not specified in the (Ivy) descriptor, the status defaults to integration for Ivy modules and Maven snapshot modules, and release for Maven modules that aren’t snapshots.

A module’s status and status scheme are taken into consideration when a latest version selector is resolved. Specifically, latest.someStatus will resolve to the highest module version that has status someStatus or a more mature status. For example, with the default status scheme in place, latest.integration will select the highest module version regardless of its status (because integration is the least mature status), whereas latest.release will select the highest module version with status release. Here is what this looks like in code:

Example 311. 'Latest' version selector

build.gradle

dependencies {
    config1 "org.sample:client:latest.integration"
    config2 "org.sample:client:latest.release"
}

task listConfigs {
    doLast {
        configurations.config1.each { println it.name }
        println()
        configurations.config2.each { println it.name }
    }
}

Output of gradle -q listConfigs

> gradle -q listConfigs
client-1.5.jar

client-1.4.jar

The next example demonstrates latest selectors based on a custom status scheme declared in a component metadata rule that applies to all modules:

Example 312. Custom status scheme

build.gradle

dependencies {
    config3 "org.sample:api:latest.silver"
    components {
        all { ComponentMetadataDetails details ->
            if (details.id.group == "org.sample" && details.id.name == "api") {
                details.statusScheme = ["bronze", "silver", "gold", "platinum"]
            }
        }
    }
}

Component metadata rules can be applied to a specified module. Modules must be specified in the form of "group:module".

Example 313. Custom status scheme by module

build.gradle

dependencies {
    config4 "org.sample:lib:latest.prod"
    components {
        withModule('org.sample:lib') { ComponentMetadataDetails details ->
            details.statusScheme = ["int", "rc", "prod"]
        }
    }
}

Gradle can also create component metadata rules utilizing Ivy-specific metadata for modules resolved from an Ivy repository. Values from the Ivy descriptor are made available via the IvyModuleDescriptor interface.

Example 314. Ivy component metadata rule

build.gradle

dependencies {
    config6 "org.sample:lib:latest.rc"
    components {
        withModule("org.sample:lib") { ComponentMetadataDetails details, IvyModuleDescriptor ivyModule ->
            if (ivyModule.branch == 'testing') {
                details.status = "rc"
            }
        }
    }
}

Note that any rule that declares specific arguments must always include a ComponentMetadataDetails argument as the first argument. The second Ivy metadata argument is optional.

Component metadata rules can also be defined using a rule source object. A rule source object is any object that contains exactly one method that defines the rule action and is annotated with @Mutate.

This method must return void, must declare ComponentMetadataDetails as its first parameter, and may declare an additional parameter of type IvyModuleDescriptor.

Example 315. Rule source component metadata rule

build.gradle

dependencies {
    config5 "org.sample:api:latest.gold"
    components {
        withModule('org.sample:api', new CustomStatusRule())
    }
}

class CustomStatusRule {
    @Mutate
    void setStatusScheme(ComponentMetadataDetails details) {
        details.statusScheme = ["bronze", "silver", "gold", "platinum"]
    }
}

Component Selection Rules

Component selection rules may influence which component instance should be selected when multiple versions are available that match a version selector. Rules are applied against every available version, and allow a version to be explicitly rejected by a rule. This allows Gradle to ignore any component instance that does not satisfy conditions set by the rule. Examples include:

  • For a dynamic version like '1.+' certain versions may be explicitly rejected from selection

  • For a static version like '1.4' an instance may be rejected based on extra component metadata such as the Ivy branch attribute, allowing an instance from a subsequent repository to be used.

Rules are configured via the ComponentSelectionRules object. Each rule configured will be called with a ComponentSelection object as an argument which contains information about the candidate version being considered. Calling ComponentSelection.reject(java.lang.String) causes the given candidate version to be explicitly rejected, in which case the candidate will not be considered for the selector.

The following example shows a rule that disallows a particular version of a module but allows the dynamic version to choose the next best candidate.

Example 316. Component selection rule

build.gradle

configurations {
    rejectConfig {
        resolutionStrategy {
            componentSelection {
                // Accept the highest version matching the requested version that isn't '1.5'
                all { ComponentSelection selection ->
                    if (selection.candidate.group == 'org.sample' && selection.candidate.module == 'api' && selection.candidate.version == '1.5') {
                        selection.reject("version 1.5 is broken for 'org.sample:api'")
                    }
                }
            }
        }
    }
}

dependencies {
    rejectConfig "org.sample:api:1.+"
}

Note that version selection starts with the highest version. The version selected will be the first version found that all component selection rules accept; a version is considered accepted if no rule explicitly rejects it.

Similarly, rules can be targeted at specific modules. Modules must be specified in the form of "group:module".

Example 317. Component selection rule with module target

build.gradle

configurations {
    targetConfig {
        resolutionStrategy {
            componentSelection {
                withModule("org.sample:api") { ComponentSelection selection ->
                    if (selection.candidate.version == "1.5") {
                        selection.reject("version 1.5 is broken for 'org.sample:api'")
                    }
                }
            }
        }
    }
}

Component selection rules can also consider component metadata when selecting a version. Possible metadata arguments that can be considered are ComponentMetadata and IvyModuleDescriptor.

Example 318. Component selection rule with metadata

build.gradle

configurations {
    metadataRulesConfig {
        resolutionStrategy {
            componentSelection {
                // Reject any versions with a status of 'experimental'
                all { ComponentSelection selection, ComponentMetadata metadata ->
                    if (selection.candidate.group == 'org.sample' && metadata.status == 'experimental') {
                        selection.reject("don't use experimental candidates from 'org.sample'")
                    }
                }
                // Accept the highest version with either a "release" branch or a status of 'milestone'
                withModule('org.sample:api') { ComponentSelection selection, IvyModuleDescriptor descriptor, ComponentMetadata metadata ->
                    if (descriptor.branch != "release" && metadata.status != 'milestone') {
                        selection.reject("'org.sample:api' must have testing branch or milestone status")
                    }
                }
            }
        }
    }
}

Note that a ComponentSelection argument is always required as the first parameter when declaring a component selection rule with additional Ivy metadata parameters, but the metadata parameters can be declared in any order.

Lastly, component selection rules can also be defined using a rule source object. A rule source object is any object that contains exactly one method that defines the rule action and is annotated with @Mutate.

This method must return void, must declare ComponentSelection as its first parameter, and may declare additional parameters of type ComponentMetadata and/or IvyModuleDescriptor.

Example 319. Component selection rule using a rule source object

build.gradle

class RejectTestBranch {
    @Mutate
    void evaluateRule(ComponentSelection selection, IvyModuleDescriptor ivy) {
        if (ivy.branch == "test") {
            selection.reject("reject test branch")
        }
    }
}

configurations {
    ruleSourceConfig {
        resolutionStrategy {
            componentSelection {
                all new RejectTestBranch()
            }
        }
    }
}

Module replacement rules

Module replacement rules allow a build to declare that a legacy library has been replaced by a new one. A good example of a new library replacing a legacy one is the "google-collections" -> "guava" migration. The team that created google-collections decided to change the module name from "com.google.collections:google-collections" to "com.google.guava:guava". This is a legitimate scenario in the industry: teams need to be able to change the names of the products they maintain, including the module coordinates. Renaming the module coordinates, however, has an impact on conflict resolution.

To explain the impact on conflict resolution, let’s consider the "google-collections" -> "guava" scenario. It may happen that both libraries are pulled into the same dependency graph: for example, "our" project depends on guava but some of our dependencies pull in a legacy version of google-collections. This can cause runtime errors, for example during test or application execution. Gradle does not automatically resolve the google-collections vs. guava conflict because it is not considered a "version conflict": the module coordinates for the two libraries are completely different, and conflict resolution is only activated when the "group" and "name" coordinates are the same but different versions are available in the dependency graph (for more info, refer to the section on conflict resolution). Traditional remedies to this problem are:

  • Declare an exclusion rule to avoid pulling "google-collections" into the graph. This is probably the most popular approach.

  • Avoid dependencies that pull in legacy libraries.

  • Upgrade the dependency version if the new version no longer pulls in a legacy library.

  • Downgrade to "google-collections". This is not recommended, and is mentioned only for completeness.

Traditional approaches work but they are not general enough. For example, an organisation might want to resolve the google-collections vs. guava conflict in all of its projects. Starting from Gradle 2.2 it is possible to declare that a certain module was replaced by another. This enables organisations to include the module replacement information in their corporate plugin suite and resolve the problem holistically for all Gradle-powered projects in the enterprise.

Example 320. Declaring module replacement

build.gradle

dependencies {
    modules {
        module("com.google.collections:google-collections") {
            replacedBy("com.google.guava:guava")
        }
    }
}

For more examples and detailed API, refer to the DSL reference for ComponentModuleMetadataHandler.

What happens when we declare that "google-collections" is replaced by "guava"? Gradle can use this information for conflict resolution. Gradle will consider every version of "guava" newer/better than any version of "google-collections", and will ensure that only the guava jar is present in the classpath / resolved file list. Note that if only "google-collections" appears in the dependency graph (e.g. no "guava"), Gradle will not eagerly replace it with "guava". Module replacement is information that Gradle uses for resolving conflicts; if there is no conflict (e.g. only "google-collections" or only "guava" in the graph), the replacement information is not used.

Currently it is not possible to declare that a given module is replaced by a set of modules. However, it is possible to declare that multiple modules are replaced by a single module.

Troubleshooting Dependency Resolution

Managing large dependency graphs can be challenging. This section describes techniques for troubleshooting issues you might encounter in your project.

Putting the version in the filename (version the jar)

The version of a library must be part of the filename. While the version of a jar is usually in the Manifest file, it isn’t readily apparent when you are inspecting a project. If someone asks you to look at a collection of 20 jar files, which would you prefer? A collection of files with names like commons-beanutils-1.3.jar or a collection of files with names like spring.jar? If dependencies have file names with version numbers you can quickly identify the versions of your dependencies.

If versions are unclear you can introduce subtle bugs which are very hard to find. For example, there might be a project which uses Hibernate 2.5. Think about a developer who decides to install version 3.0.5 of Hibernate on her machine to fix a critical security bug but forgets to notify others in the team of this change. She may address the security bug successfully, but she also may have introduced subtle bugs into a codebase that was using a now-deprecated feature from Hibernate. Weeks later there is an exception on the integration machine which can’t be reproduced on anyone’s machine. Multiple developers then spend days on this issue, only to finally realise that the error would have been easy to uncover had they known that Hibernate had been upgraded from 2.5 to 3.0.5.

Versions in jar names increase the expressiveness of your project and make it easier to maintain. This practice also reduces the potential for error.

Resolving version conflicts

Conflicting versions of the same jar should be detected and either resolved or cause an exception. If you don’t use transitive dependency management, version conflicts are undetected and the often accidental order of the classpath will determine what version of a dependency will win. On a large project with many developers changing dependencies, successful builds will be few and far between as the order of dependencies may directly affect whether a build succeeds or fails (or whether a bug appears or disappears in production).

If you haven’t had to deal with the curse of conflicting versions of jars on a classpath, here is a small anecdote of the fun that awaits you. In a large project with 30 submodules, adding a dependency to a subproject changed the order of a classpath, swapping Spring 2.5 for an older 2.4 version. While the build continued to work, developers were starting to notice all sorts of surprising (and surprisingly awful) bugs in production. Worse yet, this unintentional downgrade of Spring introduced several security vulnerabilities into the system, which now required a full security audit throughout the organization.

In short, version conflicts are bad, and you should manage your transitive dependencies to avoid them. You might also want to learn where conflicting versions are used and consolidate on a particular version of a dependency across your organization. With a good conflict reporting tool like Gradle, that information can be used to communicate with the entire organization and standardize on a single version. If you think version conflicts don’t happen to you, think again. It is very common for different first-level dependencies to rely on a range of different overlapping versions for other dependencies, and the JVM doesn’t yet offer an easy way to have different versions of the same jar in the classpath (see the section called “Dependency management and Java”).

Gradle offers the following conflict resolution strategies:

  • Newest: The newest version of the dependency is used. This is Gradle’s default strategy, and is often an appropriate choice as long as versions are backwards-compatible.

  • Fail: A version conflict results in a build failure. This strategy requires all version conflicts to be resolved explicitly in the build script. See ResolutionStrategy for details on how to explicitly choose a particular version.
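
For example, the fail strategy can be enabled for every configuration with ResolutionStrategy.failOnVersionConflict(). A minimal sketch:

build.gradle

configurations.all {
    resolutionStrategy {
        failOnVersionConflict()
    }
}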

While the strategies introduced above are usually enough to solve most conflicts, Gradle provides more fine-grained mechanisms to resolve version conflicts:

  • Configuring a first level dependency as forced. This approach is useful if the dependency in conflict is already a first level dependency. See examples in DependencyHandler.

  • Configuring any dependency (transitive or not) as forced. This approach is useful if the dependency in conflict is a transitive dependency. It can also be used to force versions of first level dependencies. See examples in ResolutionStrategy and the sketch after this list.

  • Configuring dependency resolution to prefer modules that are part of your build (transitive or not). This approach is useful if your build contains custom forks of modules (as part of Authoring Multi-Project Builds or as included builds in Composite builds). See examples in ResolutionStrategy and the sketch after this list.

  • Dependency resolve rules, an incubating feature that gives you fine-grained control over the version selected for a particular dependency.
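
For instance, forced versions and a preference for project modules can both be configured through the resolution strategy. A minimal sketch, with module coordinates assumed for illustration:

build.gradle

configurations.all {
    resolutionStrategy {
        // Force a specific version, whether the dependency is first level or transitive
        force 'org.sample:api:1.3'

        // Prefer modules that are part of this build over external equivalents
        preferProjectModules()
    }
}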

To deal with problems due to version conflicts, reports with dependency graphs are also very helpful. Such reports are another feature of dependency management.

Using dynamic versions and changing modules

There are many situations when you want to use the latest version of a particular dependency, or the latest in a range of versions. This can be a requirement during development, or you may be developing a library that is designed to work with a range of dependency versions. You can easily depend on these constantly changing dependencies by using a dynamic version. A dynamic version can be either a version range (e.g. 2.+) or it can be a placeholder for the latest version available (e.g. latest.integration).

Alternatively, sometimes the module you request can change over time, even for the same version. An example of this type of changing module is a Maven SNAPSHOT module, which always points at the latest artifact published. In other words, a standard Maven snapshot is a module that never stands still, so to speak; it is a “changing module”.

The main difference between a dynamic version and a changing module is that when you resolve a dynamic version, you’ll get the real, static version as the module name. When you resolve a changing module, the artifacts are named using the version you requested, but the underlying artifacts may change over time.
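
Both kinds of dependency are declared like any other. A minimal sketch, assuming the Java plugin’s compile configuration and illustrative coordinates:

build.gradle

dependencies {
    // Dynamic version: resolves to the highest available 2.x version
    compile 'org.sample:client:2.+'

    // Changing module: a Maven snapshot that is re-checked over time
    compile 'org.sample:utils:1.0-SNAPSHOT'

    // A non-snapshot module can also be marked as changing explicitly
    compile('org.sample:tools:1.0') { changing = true }
}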

By default, Gradle caches dynamic versions and changing modules for 24 hours. You can override the default cache modes using command line options. You can also change the cache expiry times in your build programmatically using the resolution strategy.

Controlling dependency caching programmatically

You can fine-tune certain aspects of caching using the ResolutionStrategy for a configuration.

By default, Gradle caches dynamic versions for 24 hours. To change how long Gradle will cache the resolved version for a dynamic version, use:

Example 321. Dynamic version cache control

build.gradle

configurations.all {
    resolutionStrategy.cacheDynamicVersionsFor 10, 'minutes'
}

By default, Gradle caches changing modules for 24 hours. To change how long Gradle will cache the meta-data and artifacts for a changing module, use:

Example 322. Changing module cache control

build.gradle

configurations.all {
    resolutionStrategy.cacheChangingModulesFor 4, 'hours'
}

For more details, take a look at the API documentation for ResolutionStrategy.

Controlling dependency caching from the command line

Avoiding network access with offline mode

The --offline command line switch tells Gradle to always use dependency modules from the cache, regardless of whether they are due to be checked again. When running in offline mode, Gradle will never attempt to access the network to perform dependency resolution. If required modules are not present in the dependency cache, build execution will fail.

Forcing all dependencies to be re-resolved

At times, the Gradle Dependency Cache can be out of sync with the actual state of the configured repositories. Perhaps a repository was initially misconfigured, or perhaps a “non-changing” module was published incorrectly. To refresh all dependencies in the dependency cache, use the --refresh-dependencies option on the command line.

The --refresh-dependencies option tells Gradle to ignore all cached entries for resolved modules and artifacts. A fresh resolve will be performed against all configured repositories, with dynamic versions recalculated, modules refreshed, and artifacts downloaded. However, where possible Gradle will check if the previously downloaded artifacts are valid before downloading again. This is done by comparing published SHA1 values in the repository with the SHA1 values for existing downloaded artifacts.
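
Both switches are passed on the command line like any other Gradle option, for example:

> gradle build --offline
> gradle build --refresh-dependencies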

Extending the build

Writing Custom Task Classes

Gradle supports two types of task. One such type is the simple task, where you define the task with an action closure. We have seen these in Build Script Basics. For this type of task, the action closure determines the behaviour of the task. This type of task is good for implementing one-off tasks in your build script.

The other type of task is the enhanced task, where the behaviour is built into the task, and the task provides some properties which you can use to configure the behaviour. We have seen these in Authoring Tasks. Most Gradle plugins use enhanced tasks. With enhanced tasks, you don’t need to implement the task behaviour as you do with simple tasks. You simply declare the task and configure the task using its properties. In this way, enhanced tasks let you reuse a piece of behaviour in many different places, possibly across different builds.

The behaviour and properties of an enhanced task are defined by the task’s class. When you declare an enhanced task, you specify the type, or class, of the task.

Implementing your own custom task class in Gradle is easy. You can implement a custom task class in pretty much any language you like, provided it ends up compiled to bytecode. In our examples, we are going to use Groovy as the implementation language. Groovy, Java or Kotlin are all good choices as the language to use to implement a task class, as the Gradle API has been designed to work well with these languages. In general, a task implemented using Java or Kotlin, which are statically typed, will perform better than the same task implemented using Groovy.

Packaging a task class

There are several places where you can put the source for the task class.

Build script

You can include the task class directly in the build script. This has the benefit that the task class is automatically compiled and included in the classpath of the build script without you having to do anything. However, the task class is not visible outside the build script, and so you cannot reuse the task class outside the build script it is defined in.

buildSrc project

You can put the source for the task class in the rootProjectDir/buildSrc/src/main/groovy directory. Gradle will take care of compiling and testing the task class and making it available on the classpath of the build script. The task class is visible to every build script used by the build. However, it is not visible outside the build, and so you cannot reuse the task class outside the build it is defined in. Using the buildSrc project approach separates the task declaration - that is, what the task should do - from the task implementation - that is, how the task does it.

See Organizing Build Logic for more details about the buildSrc project.
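
For example, a task class kept in buildSrc might be laid out as follows (file and project names assumed for illustration):

Build layout

myBuild/
  build.gradle
  buildSrc/
    src/main/groovy/org/gradle/GreetingTask.groovy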

Standalone project

You can create a separate project for your task class. This project produces and publishes a JAR which you can then use in multiple builds and share with others. Generally, this JAR might include some custom plugins, or bundle several related task classes into a single library. Or some combination of the two.

In our examples, we will start with the task class in the build script, to keep things simple. Then we will look at creating a standalone project.

Writing a simple task class

To implement a custom task class, you extend DefaultTask.

Example 323. Defining a custom task

build.gradle

class GreetingTask extends DefaultTask {
}

This task doesn’t do anything useful, so let’s add some behaviour. To do so, we add a method to the task and mark it with the TaskAction annotation. Gradle will call the method when the task executes. You don’t have to use a method to define the behaviour for the task. You could, for instance, call doFirst() or doLast() with a closure in the task constructor to add behaviour.
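
For instance, a minimal sketch of the constructor-based alternative, producing the same greeting:

build.gradle

class GreetingTask extends DefaultTask {
    GreetingTask() {
        // Add the task's behaviour as an action instead of a @TaskAction method
        doLast {
            println 'hello from GreetingTask'
        }
    }
}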

Example 324. A hello world task

build.gradle

class GreetingTask extends DefaultTask {
    @TaskAction
    def greet() {
        println 'hello from GreetingTask'
    }
}

// Create a task using the task type
task hello(type: GreetingTask)

Output of gradle -q hello

> gradle -q hello
hello from GreetingTask

Let’s add a property to the task, so we can customize it. Tasks are simply POGOs, and when you declare a task, you can set the properties or call methods on the task object. Here we add a greeting property, and set the value when we declare the greeting task.

Example 325. A customizable hello world task

build.gradle

class GreetingTask extends DefaultTask {
    String greeting = 'hello from GreetingTask'

    @TaskAction
    def greet() {
        println greeting
    }
}

// Use the default greeting
task hello(type: GreetingTask)

// Customize the greeting
task greeting(type: GreetingTask) {
    greeting = 'greetings from GreetingTask'
}

Output of gradle -q hello greeting

> gradle -q hello greeting
hello from GreetingTask
greetings from GreetingTask

A standalone project

Now we will move our task to a standalone project, so we can publish it and share it with others. This project is simply a Groovy project that produces a JAR containing the task class. Here is a simple build script for the project. It applies the Groovy plugin, and adds the Gradle API as a compile-time dependency.

Example 326. A build for a custom task

build.gradle

apply plugin: 'groovy'

dependencies {
    compile gradleApi()
    compile localGroovy()
}

Note: The code for this example can be found at samples/customPlugin/plugin in the ‘-all’ distribution of Gradle.


We just follow the convention for where the source for the task class should go.

Example 327. A custom task

src/main/groovy/org/gradle/GreetingTask.groovy

package org.gradle

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class GreetingTask extends DefaultTask {
    String greeting = 'hello from GreetingTask'

    @TaskAction
    def greet() {
        println greeting
    }
}

Using your task class in another project

To use a task class in a build script, you need to add the class to the build script’s classpath. To do this, you use a buildscript { } block, as described in the section called “External dependencies for the build script”. The following example shows how you might do this when the JAR containing the task class has been published to a local repository:

Example 328. Using a custom task in another project

build.gradle

buildscript {
    repositories {
        maven {
            url uri('../repo')
        }
    }
    dependencies {
        classpath group: 'org.gradle', name: 'customPlugin',
                  version: '1.0-SNAPSHOT'
    }
}

task greeting(type: org.gradle.GreetingTask) {
    greeting = 'howdy!'
}

Writing tests for your task class

You can use the ProjectBuilder class to create Project instances to use when you test your task class.

Example 329. Testing a custom task

src/test/groovy/org/gradle/GreetingTaskTest.groovy

package org.gradle

import org.gradle.api.Project
import org.gradle.testfixtures.ProjectBuilder
import org.junit.Test
import static org.junit.Assert.assertTrue

class GreetingTaskTest {
    @Test
    public void canAddTaskToProject() {
        Project project = ProjectBuilder.builder().build()
        def task = project.task('greeting', type: GreetingTask)
        assertTrue(task instanceof GreetingTask)
    }
}

Incremental tasks

Incremental tasks are an incubating feature.

Since the introduction of the incremental task implementation described below (early in the Gradle 1.6 release cycle), discussions within the Gradle community have produced superior ideas for exposing change information to task implementors. As such, the API for this feature will almost certainly change in upcoming releases. However, please do experiment with the current implementation and share your experiences with the Gradle community.

The feature incubation process, which is part of the Gradle feature lifecycle (see Appendix C), exists precisely for this purpose: ensuring high quality final implementations through the incorporation of early user feedback.

With Gradle, it’s very simple to implement a task that is skipped when all of its inputs and outputs are up to date (see the section called “Up-to-date checks (AKA Incremental Build)”). However, there are times when only a few input files have changed since the last execution, and you’d like to avoid reprocessing all of the unchanged inputs. This can be particularly useful for a transformer task that converts input files to output files on a 1:1 basis.

If you’d like to optimise your build so that only out-of-date inputs are processed, you can do so with an incremental task.

Implementing an incremental task

For a task to process inputs incrementally, that task must contain an incremental task action. This is a task action method that contains a single IncrementalTaskInputs parameter, which indicates to Gradle that the action will process the changed inputs only.

The incremental task action may supply an IncrementalTaskInputs.outOfDate(org.gradle.api.Action) action for processing any input file that is out-of-date, and an IncrementalTaskInputs.removed(org.gradle.api.Action) action that executes for any input file that has been removed since the previous execution.

Example 330. Defining an incremental task action

build.gradle

class IncrementalReverseTask extends DefaultTask {
    @InputDirectory
    def File inputDir

    @OutputDirectory
    def File outputDir

    @Input
    def inputProperty

    @TaskAction
    void execute(IncrementalTaskInputs inputs) {
        println inputs.incremental ? 'CHANGED inputs considered out of date'
                                   : 'ALL inputs considered out of date'
        if (!inputs.incremental)
            project.delete(outputDir.listFiles())

        inputs.outOfDate { change ->
            println "out of date: ${change.file.name}"
            def targetFile = new File(outputDir, change.file.name)
            targetFile.text = change.file.text.reverse()
        }

        inputs.removed { change ->
            println "removed: ${change.file.name}"
            def targetFile = new File(outputDir, change.file.name)
            targetFile.delete()
        }
    }
}

Note: The code for this example can be found at samples/userguide/tasks/incrementalTask in the ‘-all’ distribution of Gradle.


If for some reason the task is not run incrementally, e.g. when running with --rerun-tasks, only the outOfDate action is executed, even if there were deleted input files. You should consider handling this case at the beginning, as is done in the example above.

For a simple transformer task like this, the task action simply needs to generate output files for any out-of-date inputs, and delete output files for any removed inputs.

A task may only contain a single incremental task action.

Which inputs are considered out of date?

When Gradle has history of a previous task execution, and the only changes to the task execution context since that execution are to input files, then Gradle is able to determine which input files need to be reprocessed by the task. In this case, the IncrementalTaskInputs.outOfDate(org.gradle.api.Action) action will be executed for any input file that was added or modified, and the IncrementalTaskInputs.removed(org.gradle.api.Action) action will be executed for any removed input file.

However, there are many cases where Gradle is unable to determine which input files need to be reprocessed. Examples include:

  • There is no history available from a previous execution.

  • You are building with a different version of Gradle. Currently, Gradle does not use task history from a different version.

  • An upToDateWhen criterion added to the task returns false.

  • An input property has changed since the previous execution.

  • One or more output files have changed since the previous execution.

In any of these cases, Gradle will consider all of the input files to be outOfDate. The IncrementalTaskInputs.outOfDate(org.gradle.api.Action) action will be executed for every input file, and the IncrementalTaskInputs.removed(org.gradle.api.Action) action will not be executed at all.

You can check if Gradle was able to determine the incremental changes to input files with IncrementalTaskInputs.isIncremental().

An incremental task in action

Given the incremental task implementation above, we can explore the various change scenarios by example. Note that the various mutation tasks ('updateInputs', 'removeInput', etc) are only present for demonstration purposes: these would not normally be part of your build script.

First, consider the IncrementalReverseTask executed against a set of inputs for the first time. In this case, all inputs will be considered “out of date”:

Example 331. Running the incremental task for the first time

build.gradle

task incrementalReverse(type: IncrementalReverseTask) {
    inputDir = file('inputs')
    outputDir = file("$buildDir/outputs")
    inputProperty = project.properties['taskInputProperty'] ?: 'original'
}

Build layout

incrementalTask/
  build.gradle
  inputs/
    1.txt
    2.txt
    3.txt

Output of gradle -q incrementalReverse

> gradle -q incrementalReverse
ALL inputs considered out of date
out of date: 1.txt
out of date: 2.txt
out of date: 3.txt

Naturally when the task is executed again with no changes, then the entire task is up to date and no files are reported to the task action:

Example 332. Running the incremental task with unchanged inputs

Output of gradle -q incrementalReverse

> gradle -q incrementalReverse

When an input file is modified in some way or a new input file is added, then re-executing the task results in those files being reported to IncrementalTaskInputs.outOfDate(org.gradle.api.Action):

Example 333. Running the incremental task with updated input files

build.gradle

task updateInputs() {
    doLast {
        file('inputs/1.txt').text = 'Changed content for existing file 1.'
        file('inputs/4.txt').text = 'Content for new file 4.'
    }
}

Output of gradle -q updateInputs incrementalReverse

> gradle -q updateInputs incrementalReverse
CHANGED inputs considered out of date
out of date: 1.txt
out of date: 4.txt

When an existing input file is removed, then re-executing the task results in that file being reported to IncrementalTaskInputs.removed(org.gradle.api.Action):

Example 334. Running the incremental task with an input file removed

build.gradle

task removeInput() {
    doLast {
        file('inputs/3.txt').delete()
    }
}

Output of gradle -q removeInput incrementalReverse

> gradle -q removeInput incrementalReverse
CHANGED inputs considered out of date
removed: 3.txt

When an output file is deleted (or modified), then Gradle is unable to determine which input files are out of date. In this case, all input files are reported to the IncrementalTaskInputs.outOfDate(org.gradle.api.Action) action, and no input files are reported to the IncrementalTaskInputs.removed(org.gradle.api.Action) action:

Example 335. Running the incremental task with an output file removed

build.gradle

task removeOutput() {
    doLast {
        file("$buildDir/outputs/1.txt").delete()
    }
}

Output of gradle -q removeOutput incrementalReverse

> gradle -q removeOutput incrementalReverse
ALL inputs considered out of date
out of date: 1.txt
out of date: 2.txt
out of date: 3.txt

When a task input property is modified, Gradle is unable to determine how this property impacted the task outputs, so all input files are assumed to be out of date. So similar to the changed output file example, all input files are reported to the IncrementalTaskInputs.outOfDate(org.gradle.api.Action) action, and no input files are reported to the IncrementalTaskInputs.removed(org.gradle.api.Action) action:

Example 336. Running the incremental task with an input property changed

Output of gradle -q -PtaskInputProperty=changed incrementalReverse

> gradle -q -PtaskInputProperty=changed incrementalReverse
ALL inputs considered out of date
out of date: 1.txt
out of date: 2.txt
out of date: 3.txt

Storing incremental state for cached tasks

Using Gradle’s IncrementalTaskInputs is not the only way to create tasks that only work on changes since the last execution. Tools like the Kotlin compiler provide incrementality as a built-in feature. The way this is typically implemented is that the tool stores some analysis data about the state of the previous execution in a file. If such state files are relocatable, they can be declared as outputs of the task. This way, when the task’s results are loaded from the cache, the next execution can already use the analysis data loaded from the cache, too.

However, if the state files are non-relocatable, they can’t be shared via the build cache. Indeed, when the task is loaded from cache, any such state files must be cleaned up to prevent stale state from confusing the tool during the next execution. Gradle can ensure such stale files are removed if they are declared via task.localState.register() or if a property is marked with the @LocalState annotation.
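
A minimal sketch of declaring local state, assuming a hypothetical tool that keeps a non-relocatable analysis file:

build.gradle

class ToolTask extends DefaultTask {
    // Local state: cleaned up by Gradle when the task's outputs are loaded from cache
    @LocalState
    File analysisFile

    @TaskAction
    void run() {
        // Hypothetical: invoke the tool, which reads and updates analysisFile
    }
}

task runTool(type: ToolTask) {
    analysisFile = file("$buildDir/tool/analysis.bin")
}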

The Worker API

The Worker API is an incubating feature.

As can be seen from the discussion of incremental tasks, the work that a task performs can be viewed as discrete units (i.e. a subset of inputs that are transformed to a certain subset of outputs). Many times, these units of work are highly independent of each other, meaning they can be performed in any order and simply aggregated together to form the overall action of the task. In a single-threaded execution, these units of work would execute in sequence; however, if we have multiple processors, it would be desirable to perform independent units of work concurrently. By doing so, we can fully utilize the available resources at build time and complete the activity of the task faster.

The Worker API provides a mechanism for doing exactly this. It allows for safe, concurrent execution of multiple items of work during a task action. But the benefits of the Worker API are not confined to parallelizing the work of a task. You can also configure a desired level of isolation such that work can be executed in an isolated classloader or even in an isolated process. Furthermore, the benefits extend beyond even the execution of a single task. Using the Worker API, Gradle can begin to execute tasks in parallel by default. In other words, once a task has submitted its work to be executed asynchronously, and has exited the task action, Gradle can then begin the execution of other independent tasks in parallel, even if those tasks are in the same project.

Using the Worker API

In order to submit work to the Worker API, two things must be provided: an implementation of the unit of work, and a configuration for the unit of work. The implementation is simply a class that extends java.lang.Runnable. This class should have a constructor that is annotated with javax.inject.Inject and accepts parameters that configure the class for a single unit of work. When a unit of work is submitted to the WorkerExecutor, an instance of this class will be created and the parameters configured for the unit of work will be passed to the constructor.

Example 337. Creating a unit of work implementation

build.gradle

import org.gradle.workers.WorkerExecutor

import javax.inject.Inject

// The implementation of a single unit of work
class ReverseFile implements Runnable {
    File fileToReverse
    File destinationFile

    @Inject
    public ReverseFile(File fileToReverse, File destinationFile) {
        this.fileToReverse = fileToReverse
        this.destinationFile = destinationFile
    }

    @Override
    public void run() {
        destinationFile.text = fileToReverse.text.reverse()
    }
}

The configuration of the worker is represented by a WorkerConfiguration and is set by configuring an instance of this object at the time of submission. However, in order to submit the unit of work, it is necessary to first acquire the WorkerExecutor. To do this, a constructor should be provided that is annotated with javax.inject.Inject and accepts a WorkerExecutor parameter. Gradle will inject the instance of WorkerExecutor at runtime when the task is created.

Example 338. Submitting a unit of work for execution

build.gradle

class ReverseFiles extends SourceTask {
    final WorkerExecutor workerExecutor

    @OutputDirectory
    File outputDir

    // The WorkerExecutor will be injected by Gradle at runtime
    @Inject
    public ReverseFiles(WorkerExecutor workerExecutor) {
        this.workerExecutor = workerExecutor
    }

    @TaskAction
    void reverseFiles() {
        // Create and submit a unit of work for each file
        source.files.each { file ->
            workerExecutor.submit(ReverseFile.class) { WorkerConfiguration config ->
                // Use the minimum level of isolation
                config.isolationMode = IsolationMode.NONE

                // Constructor parameters for the unit of work implementation
                config.params file, project.file("${outputDir}/${file.name}")
            }
        }
    }
}

Note that one element of the WorkerConfiguration is the params property. These are the parameters passed to the constructor of the unit of work implementation for each item of work submitted. Any parameters provided to the unit of work must be java.io.Serializable.

Once all of the work for a task action has been submitted, it is safe to exit the task action. The work will be executed asynchronously and in parallel (up to the setting of max-workers). Of course, any tasks that are dependent on this task (and any subsequent task actions of this task) will not begin executing until all of the asynchronous work completes. However, other independent tasks that have no relationship to this task can begin executing immediately.

If any failures occur while executing the asynchronous work, the task will fail and a WorkerExecutionException will be thrown detailing the failure for each failed work item. This will be treated like any failure during task execution and will prevent any dependent tasks from executing.

In some cases, however, it might be desirable to wait for work to complete before exiting the task action. This is possible using the WorkerExecutor.await() method. As in the case of allowing the work to complete asynchronously, any failures that occur while executing an item of work will be surfaced as a WorkerExecutionException thrown from the WorkerExecutor.await() method.

Note that Gradle will only begin running other independent tasks in parallel when a task has exited a task action and returned control of execution to Gradle. When WorkerExecutor.await() is used, execution does not leave the task action. This means that Gradle will not allow other tasks to begin executing and will wait for the task action to complete before doing so.

Example 339. Waiting for asynchronous work to complete

build.gradle

// Create and submit a unit of work for each file
source.files.each { file ->
    workerExecutor.submit(ReverseFile.class) { config ->
        config.isolationMode = IsolationMode.NONE
        // Constructor parameters for the unit of work implementation
        config.params file, project.file("${outputDir}/${file.name}")
    }
}

// Wait for all asynchronous work to complete before continuing
workerExecutor.await()
logger.lifecycle("Created ${outputDir.listFiles().size()} reversed files in ${project.relativePath(outputDir)}")

Isolation Modes

Gradle provides three isolation modes that can be configured on a unit of work and are specified using the IsolationMode enum:

IsolationMode.NONE

This states that the work should be run in a thread with a minimum of isolation. For instance, it will share the same classloader that the task is loaded from. This is the fastest level of isolation.

IsolationMode.CLASSLOADER

This states that the work should be run in a thread with an isolated classloader. The classloader will have the classpath from the classloader that the unit of work implementation class was loaded from as well as any additional classpath entries added through WorkerConfiguration.classpath(java.lang.Iterable).

IsolationMode.PROCESS

This states that the work should be run with a maximum level of isolation by executing the work in a separate process. The classloader of the process will use the classpath from the classloader that the unit of work was loaded from as well as any additional classpath entries added through WorkerConfiguration.classpath(java.lang.Iterable). Furthermore, the process will be a Worker Daemon which will stay alive and can be reused for future work items that may have the same requirements. This process can be configured with different settings than the Gradle JVM using WorkerConfiguration.forkOptions(org.gradle.api.Action).
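
For example, classloader isolation with an additional tool classpath might be configured as follows. A minimal sketch, assuming a toolClasspath configuration defined elsewhere in the build:

build.gradle

workerExecutor.submit(ReverseFile.class) { WorkerConfiguration config ->
    // Run this work in an isolated classloader
    config.isolationMode = IsolationMode.CLASSLOADER

    // Additional classpath entries visible to the unit of work
    config.classpath(configurations.toolClasspath)

    // Constructor parameters for the unit of work implementation
    config.params file, project.file("${outputDir}/${file.name}")
}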

Worker Daemons

When using IsolationMode.PROCESS, Gradle will start a long-lived Worker Daemon process that can be reused for future work items.

Example 340. Submitting an item of work to run in a worker daemon

build.gradle

workerExecutor.submit(ReverseFile.class) { WorkerConfiguration config ->
    // Run this work in an isolated process
    config.isolationMode = IsolationMode.PROCESS

    // Configure the options for the forked process
    config.forkOptions { JavaForkOptions options ->
        options.maxHeapSize = "512m"
        options.systemProperty "org.gradle.sample.showFileSize", "true"
    }

    // Constructor parameters for the unit of work implementation
    config.params file, project.file("${outputDir}/${file.name}")
}

When a unit of work for a Worker Daemon is submitted, Gradle will first look to see if a compatible, idle daemon already exists. If so, it will send the unit of work to the idle daemon, marking it as busy. If not, it will start a new daemon. When evaluating compatibility, Gradle looks at a number of criteria, all of which can be controlled through WorkerConfiguration.forkOptions(org.gradle.api.Action).

executable

A daemon is considered compatible only if it uses the same java executable.

classpath

A daemon is considered compatible if its classpath contains all of the classpath entries requested. Note that a daemon is considered compatible if it has more classpath entries in addition to those requested.

heap settings

A daemon is considered compatible if it has at least the same heap size settings as requested. In other words, a daemon that has higher heap settings than requested would be considered compatible.

jvm arguments

A daemon is considered compatible if it has set all of the jvm arguments requested. Note that a daemon is considered compatible if it has additional jvm arguments beyond those requested (except for arguments treated specially such as heap settings, assertions, debug, etc).

system properties

A daemon is considered compatible if it has set all of the system properties requested with the same values. Note that a daemon is considered compatible if it has additional system properties beyond those requested.

environment variables

A daemon is considered compatible if it has set all of the environment variables requested with the same values. Note that a daemon is considered compatible if it has more environment variables in addition to those requested.

bootstrap classpath

A daemon is considered compatible if it contains all of the bootstrap classpath entries requested. Note that a daemon is considered compatible if it has more bootstrap classpath entries in addition to those requested.

debug

A daemon is considered compatible only if debug is set to the same value as requested (true or false).

enable assertions

A daemon is considered compatible only if enable assertions is set to the same value as requested (true or false).

default character encoding

A daemon is considered compatible only if the default character encoding is set to the same value as requested.

Worker daemons will remain running until either the build daemon that started them is stopped, or system memory becomes scarce. When available system memory is low, Gradle will begin stopping worker daemons in an attempt to minimize memory consumption.

Re-using logic between task classes

There are different ways to re-use logic between task classes. The easiest case is when you can extract the logic you want to share into a separate method or class and then use the extracted piece of code in your tasks. For example, the Copy task re-uses the logic of the Project.copy(org.gradle.api.Action) method. Another option is to add a task dependency on the task whose outputs you want to re-use. Other options include using task rules or the Worker API.
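
For example, a custom task can delegate to Project.copy(org.gradle.api.Action) rather than re-implementing the copying logic. A minimal sketch, with source and destination paths assumed:

build.gradle

class BackupReportsTask extends DefaultTask {
    @TaskAction
    void backup() {
        // Re-use the copy logic provided by the Project API
        project.copy {
            from "${project.buildDir}/reports"
            into "${project.buildDir}/backup"
        }
    }
}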

Writing Custom Plugins

A Gradle plugin packages up reusable pieces of build logic, which can be used across many different projects and builds. Gradle allows you to implement your own plugins, so you can reuse your build logic, and share it with others.

You can implement a Gradle plugin in any language you like, provided the implementation ends up compiled as bytecode. In our examples, we are going to use Groovy as the implementation language. Groovy, Java or Kotlin are all good choices as the language to use to implement a plugin, as the Gradle API has been designed to work well with these languages. In general, a plugin implemented using Java or Kotlin, which are statically typed, will perform better than the same plugin implemented using Groovy.

Packaging a plugin

There are several places where you can put the source for the plugin.

Build script

You can include the source for the plugin directly in the build script. This has the benefit that the plugin is automatically compiled and included in the classpath of the build script without you having to do anything. However, the plugin is not visible outside the build script, and so you cannot reuse the plugin outside the build script it is defined in.

buildSrc project

You can put the source for the plugin in the rootProjectDir/buildSrc/src/main/groovy directory. Gradle will take care of compiling and testing the plugin and making it available on the classpath of the build script. The plugin is visible to every build script used by the build. However, it is not visible outside the build, and so you cannot reuse the plugin outside the build it is defined in.

See Organizing Build Logic for more details about the buildSrc project.

Standalone project

You can create a separate project for your plugin. This project produces and publishes a JAR which you can then use in multiple builds and share with others. Generally, this JAR might include some plugins, or bundle several related task classes into a single library. Or some combination of the two.

In our examples, we will start with the plugin in the build script, to keep things simple. Then we will look at creating a standalone project.

Writing a simple plugin

To create a Gradle plugin, you need to write a class that implements the Plugin interface. When the plugin is applied to a project, Gradle creates an instance of the plugin class and calls the instance’s Plugin.apply(T) method. The project object is passed as a parameter, which the plugin can use to configure the project however it needs to. The following sample contains a greeting plugin, which adds a hello task to the project.

Example 341. A custom plugin

build.gradle

class GreetingPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.task('hello') {
            doLast {
                println 'Hello from the GreetingPlugin'
            }
        }
    }
}

// Apply the plugin
apply plugin: GreetingPlugin

Output of gradle -q hello

> gradle -q hello
Hello from the GreetingPlugin

One thing to note is that a new instance of a plugin is created for each project it is applied to. Also note that the Plugin class is a generic type. This example has it receiving the Project type as a type parameter. A plugin can instead receive a parameter of type Settings, in which case the plugin can be applied in a settings script, or a parameter of type Gradle, in which case the plugin can be applied in an initialization script.
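
A minimal sketch of a plugin that targets Settings, and can therefore be applied in a settings script (the message is illustrative):

settings.gradle

class GreetingSettingsPlugin implements Plugin<Settings> {
    void apply(Settings settings) {
        println "Applied to build rooted at ${settings.rootDir}"
    }
}

apply plugin: GreetingSettingsPlugin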

Making the plugin configurable

Most plugins need to obtain some configuration from the build script. One method for doing this is to use extension objects. The Gradle Project has an associated ExtensionContainer object that contains all the settings and properties for the plugins that have been applied to the project. You can provide configuration for your plugin by adding an extension object to this container. An extension object is simply a Java Bean compliant class. Groovy is a good language choice to implement an extension object because plain old Groovy objects contain all the getter and setter methods that a Java Bean requires. Java and Kotlin are other good choices.

Let’s add a simple extension object to the project. Here we add a greeting extension object to the project, which allows you to configure the greeting.

Example 342. A custom plugin extension

build.gradle

class GreetingPluginExtension {
    String message = 'Hello from GreetingPlugin'
}

class GreetingPlugin implements Plugin<Project> {
    void apply(Project project) {
        // Add the 'greeting' extension object
        def extension = project.extensions.create('greeting', GreetingPluginExtension)
        // Add a task that uses configuration from the extension object
        project.task('hello') {
            doLast {
                println extension.message
            }
        }
    }
}

apply plugin: GreetingPlugin

// Configure the extension
greeting.message = 'Hi from Gradle'

Output of gradle -q hello

> gradle -q hello
Hi from Gradle

In this example,