SQLFluff accepts configuration either through the command line or through configuration files. There is rough parity between the two approaches with the exception that templating configuration must be done via a file, because it otherwise gets slightly complicated.

For details of what’s available on the command line check out the CLI Reference.

For file based configuration, SQLFluff will look for the following files in order. Later files (if found) will be used to overwrite any values read from earlier files.

  • setup.cfg

  • tox.ini

  • pep8.ini

  • .sqlfluff

  • pyproject.toml

Within these files, the first four will be read like a cfg file, and SQLFluff will look for sections which start with sqlfluff, where subsections are delimited by a colon. For example the jinjacontext section will be indicated in the section started with [sqlfluff:jinjacontext].

For example, a snippet from a .sqlfluff file (as well as any of the supported cfg file types):

[sqlfluff]
templater = jinja
sql_file_exts = .sql,.sql.j2,.dml,.ddl

[sqlfluff:indentation]
indented_joins = false
indented_using_on = true
template_blocks_indent = false

[sqlfluff:templater]
unwrap_wrapped_queries = true

[sqlfluff:templater:jinja]
apply_dbt_builtins = true
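The colon-delimited section naming can be illustrated with Python's standard configparser, which is how cfg-style files are read. This is a minimal sketch of the unpacking idea, not SQLFluff's actual loader:

```python
import configparser

CFG = """
[sqlfluff]
templater = jinja

[sqlfluff:templater:jinja:context]
num_things = 456
"""

parser = configparser.ConfigParser()
parser.read_string(CFG)

# Split each sqlfluff section name on ":" to recover the nested config path.
paths = {
    tuple(name.split(":")): dict(parser[name])
    for name in parser.sections()
    if name.startswith("sqlfluff")
}
print(paths[("sqlfluff", "templater", "jinja", "context")]["num_things"])  # prints: 456
```

Note that configparser values are always strings; any type coercion happens later.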

For the pyproject.toml file, all valid sections start with tool.sqlfluff and subsections are delimited by a dot. For example the jinjacontext section will be indicated in the section started with [tool.sqlfluff.jinjacontext].

For example, a snippet from a pyproject.toml file:

[tool.sqlfluff.core]
templater = "jinja"
sql_file_exts = ".sql,.sql.j2,.dml,.ddl"

[tool.sqlfluff.indentation]
indented_joins = false
indented_using_on = true
template_blocks_indent = false

[tool.sqlfluff.templater]
unwrap_wrapped_queries = true

[tool.sqlfluff.templater.jinja]
apply_dbt_builtins = true

# For rule specific configuration, use dots between the names exactly
# as you would in .sqlfluff. In the background, SQLFluff will unpack the
# configuration paths accordingly.
[tool.sqlfluff.rules.capitalisation.keywords]
capitalisation_policy = "upper"


SQLFluff uses nesting in its configuration files, with config files closer to the file being linted overriding (or patching, if you will) values from config files further away. That means you'll end up with a final config which will be a patchwork of all the values from the config files loaded up to that path. The exception to this is the value for templater, which cannot be set in config files in subdirectories of the working directory. You don't need any config files to be present to make SQLFluff work. If you do want to override any values though, SQLFluff will use files in the following locations in order, with values from later steps overriding those from earlier:

  1. […and this one doesn’t really count] There’s a default config as part of the SQLFluff package. You can find this below, in the Default Configuration section.

  2. It will look in the user’s os-specific app config directory. On OSX this is ~/Library/Preferences/sqlfluff, Unix is ~/.config/sqlfluff, Windows is <home>\AppData\Local\sqlfluff\sqlfluff, for any of the filenames above in the main Configuration section. If multiple are present, they will patch/override each other in the order above.

  3. It will look for the same files in the user’s home directory (~).

  4. It will look for the same files in the current working directory.

  5. [if parsing a file in a subdirectory of the current working directory] It will look for the same files in every subdirectory between the current working dir and the file directory.

  6. It will look for the same files in the directory containing the file being linted.

This whole structure leads to efficient configuration, in particular in projects which utilise a lot of complicated templating.

In-File Configuration Directives

In addition to the configuration files mentioned above, SQLFluff also supports comment based configuration switching in files. This allows specific SQL files to modify a default configuration if they have specific needs.

When used, these apply to the whole file, and are parsed from the file in an initial step before the rest of the file is properly parsed. This means they can be used for both rule configuration and also for parsing configuration.

To use these, the syntax must start as an inline sql comment beginning with sqlfluff (i.e. -- sqlfluff). The line is then interpreted as a colon-separated address of the configuration value you wish to set. A few common examples are shown below:

-- Set Indented Joins
-- sqlfluff:indentation:indented_joins:true

-- Set a smaller indent for this file
-- sqlfluff:indentation:tab_space_size:2

-- Set keywords to be capitalised
-- sqlfluff:rules:capitalisation.keywords:capitalisation_policy:upper
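These directives can be collected with a simple pass over the file's comment lines. A rough sketch of the idea (SQLFluff's real parser handles more edge cases):

```python
def parse_inline_config(sql: str) -> dict:
    """Collect `-- sqlfluff:<path>:<value>` comment directives.

    The final colon-separated token is the value; everything before it
    is the address of the configuration setting.
    """
    prefix = "-- sqlfluff:"
    config = {}
    for line in sql.splitlines():
        line = line.strip()
        if line.startswith(prefix):
            parts = line[len(prefix):].split(":")
            config[tuple(parts[:-1])] = parts[-1]
    return config

sql = """
-- sqlfluff:indentation:tab_space_size:2
-- sqlfluff:rules:capitalisation.keywords:capitalisation_policy:upper
SELECT 1
"""
print(parse_inline_config(sql))
```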


We recommend only using this configuration approach for configuration that applies to one file in isolation. For configuration changes for areas of a project or for whole projects we recommend Nesting of configuration files.

Rule Configuration

Rules can be configured with the .sqlfluff config files.

Common rule configurations can be set in the [sqlfluff:rules] section.

For example:

[sqlfluff:rules]
allow_scalar = True
single_table_references = consistent
unquoted_identifiers_policy = all

Rule specific configurations are set in rule specific subsections.

For example, enforce that keywords are upper case by configuring the rule CP01:

[sqlfluff:rules:capitalisation.keywords]
# Keywords
capitalisation_policy = upper

All possible options for rule sections are documented in Rules Reference.

For an overview of the most common rule configurations that you may want to tweak, see Default Configuration (and use Rules Reference to find the available alternatives).

Enabling and Disabling Rules

The decision as to which rules are applied to a given file is applied on a file by file basis, by the effective configuration for that file. There are two configuration values which you can use to set this:

  • rules, which explicitly enables the specified rules. If this parameter is unset or empty for a file, this implies “no selection” and so “all rules” is taken to be the meaning.

  • exclude_rules, which explicitly disables the specified rules. This parameter is applied after the rules parameter so can be used to subtract from the otherwise enabled set.

Each of these two configuration values accept a comma separated list of references. Each of those references can be:

  • a rule code e.g. LN01

  • a rule name e.g. layout.indent

  • a rule alias, which is often a deprecated code e.g. L003

  • a rule group e.g. layout or capitalisation

These different references can be mixed within a given expression, which results in a very powerful syntax for selecting exactly which rules are active for a given file.
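To make the resolution concrete, here is a sketch of how such references expand to rule codes. The three-rule lookup table is hypothetical and purely illustrative:

```python
# Hypothetical lookup table: each rule with its name, aliases and groups.
RULES = {
    "LT01": {"name": "layout.spacing", "aliases": {"L001"}, "groups": {"layout", "core"}},
    "LT08": {"name": "layout.cte_newline", "aliases": {"L022"}, "groups": {"layout"}},
    "CP01": {"name": "capitalisation.keywords", "aliases": {"L010"}, "groups": {"capitalisation", "core"}},
}

def expand(refs: str) -> set:
    """Resolve a comma separated list of codes/names/aliases/groups to codes."""
    selected = set()
    for ref in [r.strip() for r in refs.split(",") if r.strip()]:
        for code, spec in RULES.items():
            if ref in (code, spec["name"]) or ref in spec["aliases"] or ref in spec["groups"]:
                selected.add(code)
    return selected

def effective_rules(rules: str = "", exclude_rules: str = "") -> set:
    # An unset/empty `rules` means "all rules"; exclusions apply afterwards.
    enabled = expand(rules) if rules else set(RULES)
    return enabled - expand(exclude_rules)

print(sorted(effective_rules(rules="layout", exclude_rules="LT08")))  # prints: ['LT01']
```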


It’s worth mentioning here that the application of rules and exclude_rules, with groups, aliases and names, in projects with potentially multiple nested configuration files defining different rules for different areas of a project can get very confusing very fast. While this flexibility is intended for users to take advantage of, we do have some recommendations about how to do this in a way that remains manageable.

When considering configuration inheritance, each of rules and exclude_rules will totally overwrite any values in parent config files if they are set in a child file. While the subtraction operation between both of them is calculated “per file”, there is no combination operation between two definitions of rules (just one overwrites the other).

The effect of this is that we recommend one of two approaches:

  1. Simply only use rules. This has the upshot of each area of your project being very explicit in which rules are enabled. When that changes for part of your project you just reset the whole list of applicable rules for that part of the project.

  2. Set a single rules value in your master project config file and then only use exclude_rules in sub-configuration files to turn off specific rules for parts of the project where those rules are inappropriate. This keeps the simplicity of only having one value which is inherited, but allows slightly easier and simpler rollout of new rules because we manage by exception.

For example, to disable the rules LT08 and RF02:

exclude_rules = LT08, RF02

To enable only specific rules, configure rules instead.

For example, to enable RF02:

rules = RF02

Rules can also be enabled/disabled by their grouping. Right now, the only rule grouping is core. This will enable (or disable) a select group of rules that have been deemed ‘core rules’.

rules = core

More information about ‘core rules’ can be found in the Rules Reference.

Additionally, some rules have a special force_enable configuration option, which allows to enable the given rule even for dialects where it is disabled by default. The rules that support this can be found in the Rules Reference.

The default values can be seen in Default Configuration.

See also: Ignoring Errors & Files.

Downgrading rules to warnings

To keep displaying violations for specific rules, but not have those issues lead to a failed run, rules can be downgraded to warnings. Rules set as warnings won’t cause a file to fail, but will still be shown in the CLI to warn users of their presence.

The configuration of this behaves very like exclude_rules above:

warnings = LT01, LT04

With this configuration, files with no other issues (other than those set to warn) will pass. If there are still other issues, then the file will still fail, but will show both warnings and failures.

== [test.sql] PASS
L:   2 | P:   9 | LT01 | WARNING: Missing whitespace before +
== [test2.sql] FAIL
L:   2 | P:   8 | CP02 | Unquoted identifiers must be consistently upper case.
L:   2 | P:  11 | LT01 | WARNING: Missing whitespace before +

This is particularly useful as a transitional tool when considering the introduction of new rules on a project, where you might want to make users aware of issues without blocking their workflow (yet).

Layout & Spacing Configuration

The [sqlfluff:layout] section of the config controls the treatment of spacing and line breaks across all rules. To understand more about this section, see the section of the docs dedicated to layout: Configuring Layout.

Jinja Templating Configuration

When thinking about Jinja templating there are two different kinds of things that a user might want to fill into a templated file, variables and functions/macros. Currently functions aren’t implemented in any of the templaters.

Variable Templating

Variables are available in the jinja, python and placeholder templaters. By default the templating engine will expect variables for templating to be available in the config, and the templater will look in the section corresponding to the context for that templater. By convention, the config for the jinja templater is found in the sqlfluff:templater:jinja:context section, the config for the python templater is found in the sqlfluff:templater:python:context section, and the one for the placeholder templater is found in the sqlfluff:templater:placeholder:context section.

For example, if passed the following .sql file:

SELECT {{ num_things }} FROM {{ tbl_name }} WHERE id > 10 LIMIT 5

…and the following configuration in .sqlfluff in the same directory:

[sqlfluff]
templater = jinja

[sqlfluff:templater:jinja:context]
num_things=456
tbl_name=my_table
…then before parsing, the sql will be transformed to:

SELECT 456 FROM my_table WHERE id > 10 LIMIT 5


If there are variables in the template which cannot be found in the current configuration context, then this will raise a SQLTemplatingError and this will appear as a violation without a line number, quoting the name of the variable that couldn’t be found.
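Conceptually the substitution behaves like the following sketch. SQLFluff actually uses the Jinja engine; the LookupError here is only a stand-in for SQLTemplatingError:

```python
import re

def render(sql: str, context: dict) -> str:
    """Substitute {{ var }} markers from a context dict, raising for
    any variable missing from the configuration context."""
    def substitute(match):
        name = match.group(1)
        if name not in context:
            raise LookupError(f"Undefined jinja variable: {name!r}")
        return str(context[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, sql)

rendered = render(
    "SELECT {{ num_things }} FROM {{ tbl_name }} WHERE id > 10 LIMIT 5",
    {"num_things": 456, "tbl_name": "my_table"},
)
print(rendered)  # prints: SELECT 456 FROM my_table WHERE id > 10 LIMIT 5
```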

Placeholder templating

Libraries such as SQLAlchemy or Psycopg use different parameter placeholder styles to mark where a parameter has to be inserted in the query.

For example a query in SQLAlchemy can look like this:

SELECT * FROM table WHERE id = :myid

At runtime :myid will be replaced by a value provided by the application and escaped as needed, but this is not standard SQL and cannot be parsed as is.

In order to parse these queries, it is then necessary to replace these placeholders with sample values, and this is done with the placeholder templater.

Placeholder templating can be enabled in the config using:

[sqlfluff]
templater = placeholder

A few common styles are supported:

 -- colon
 WHERE bla = :my_name

 -- colon_nospaces
 -- (use with caution as more prone to false positives)
 WHERE bla = table:my_name

 -- numeric_colon
 WHERE bla = :2

 -- pyformat
 WHERE bla = %(my_name)s

 -- dollar
 WHERE bla = $my_name or WHERE bla = ${my_name}

 -- question_mark
 WHERE bla = ?

 -- numeric_dollar
 WHERE bla = $3 or WHERE bla = ${3}

 -- percent
 WHERE bla = %s

 -- ampersand
 WHERE bla = &s or WHERE bla = &{s} or USE DATABASE MARK_{ENV}

These can be configured by setting param_style to the names above:

[sqlfluff:templater:placeholder]
param_style = colon
my_name = 'john'

It is then necessary to set sample values for each parameter, like my_name above. Notice that the value needs to be escaped, as it will be replaced as a string during parsing.

When parameters are positional, like question_mark, then their name is simply the order in which they appear, starting with 1.

[sqlfluff:templater:placeholder]
param_style = question_mark
1 = 'john'

In case you need a parameter style different from the ones above, you can pass a custom regex.

[sqlfluff:templater:placeholder]
param_regex = __(?P<param_name>[\w_]+)__
my_name = 'john'

N.B. quotes around param_regex in the config are interpreted literally by the templater, e.g. param_regex='__(?P<param_name>[\w_]+)__' matches '__some_param__' not __some_param__

The named parameter param_name will be used as the key to replace; if it is missing, the parameter is assumed to be positional and numbers are used instead.
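As an illustration, the custom regex above can be applied with a straightforward re.sub. This is a sketch of the substitution idea, not the templater's implementation:

```python
import re

# The custom style from the config above: parameters wrapped in
# double underscores, captured by the named group `param_name`.
param_regex = re.compile(r"__(?P<param_name>[\w_]+)__")
params = {"my_name": "'john'"}

def substitute(sql: str) -> str:
    # Look up each match by its named group and splice in the sample value.
    return param_regex.sub(lambda m: params[m.group("param_name")], sql)

print(substitute("SELECT * FROM tbl WHERE name = __my_name__"))
# prints: SELECT * FROM tbl WHERE name = 'john'
```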

Also consider making a pull request to the project to have your style added, it may be useful to other people and simplify your configuration.

Complex Variable Templating

Two more advanced features of variable templating are case sensitivity and native python types. Both are illustrated in the following example:

[sqlfluff:templater:jinja:context]
my_list = ['a', 'b', 'c']
MY_LIST = ("d", "e", "f")
my_where_dict = {"field_1": 1, "field_2": 2}

…applied to the following .sql file:

SELECT
    {% for elem in MY_LIST %}
        '{{elem}}' {% if not loop.last %}||{% endif %}
    {% endfor %} as concatenated_list
FROM tbl
WHERE
    {% for field, value in my_where_dict.items() %}
        {{field}} = {{value}} {% if not loop.last %}and{% endif %}
    {% endfor %}

…will render as…

SELECT
    'd' || 'e' || 'f' as concatenated_list
FROM tbl
WHERE
    field_1 = 1 and field_2 = 2

Note that the variable was replaced in a case sensitive way and that the settings in the config file were interpreted as native python types.
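The native-type behaviour is similar to evaluating each config value as a Python literal, for example with ast.literal_eval. This is a sketch of the idea, not SQLFluff's exact coercion logic:

```python
import ast

def coerce(value: str):
    """Interpret a config value as a native Python literal where possible,
    otherwise keep it as a plain string."""
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return value

print(coerce("('d', 'e', 'f')"))               # a real tuple
print(coerce('{"field_1": 1, "field_2": 2}'))  # a real dict
print(coerce("my_table"))                      # plain strings stay strings
```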

Macro Templating

Macros (which also look and feel like functions) are available only in the jinja templater. Similar to Variable Templating, these are specified in config files; what’s different in this case is how they are named. Similar to the context section above, macros are configured separately in the macros section of the config. Consider the following example.

If passed the following .sql file:

SELECT {{ my_macro(6) }} FROM some_table

…and the following configuration in .sqlfluff in the same directory (note the tight control of whitespace):

[sqlfluff:templater:jinja:macros]
a_macro_def = {% macro my_macro(n) %}{{ n }} + {{ n * 2 }}{% endmacro %}

…then before parsing, the sql will be transformed to:

SELECT 6 + 12 FROM some_table

Note that in the code block above, the variable name in the config is a_macro_def, and this isn’t apparently otherwise used anywhere else. Broadly this is accurate, however within the configuration loader this will still be used to overwrite previous values in other config files. As such this introduces the idea of config blocks which could be selectively overwritten by other configuration files downstream as required.

In addition to macros specified in the config file, macros can also be loaded from files or folders. This is specified in the config file:

[sqlfluff:templater:jinja]
load_macros_from_path = my_macros

load_macros_from_path is a comma-separated list of .sql files or folders. Locations are relative to the config file. For example, if the config file above was found at /home/my_project/.sqlfluff, then SQLFluff will look for macros in the folder /home/my_project/my_macros/ (but not subfolders). Any macros defined in the config will always take precedence over a macro defined in the path.

  • .sql files: Macros in these files are available in every .sql file without requiring a Jinja include or import.

  • Folders: To use macros from the .sql files in folders, use Jinja include or import as explained below.

Note: The load_macros_from_path setting also defines the search path for Jinja include or import. Unlike with macros (as noted above), subdirectories are supported. For example, if load_macros_from_path is set to my_macros, and there is a file my_macros/subdir/my_file.sql, you can do:

{% include 'subdir/include_comment.sql' %}


Throughout the templating process whitespace will still be treated rigorously, and this includes newlines. In particular you may choose to provide dummy macros in your configuration different from the actual macros used in production.

REMEMBER: The reason SQLFluff supports macros is to enable it to parse templated sql without it being a blocker. It shouldn’t be a requirement that the templating is accurate - it only needs to work well enough that parsing and linting are helpful.

Builtin Macro Blocks

One of the main use cases which inspired SQLFluff as a project was dbt. It uses jinja templating extensively and leads to some users maintaining large repositories of sql files which could potentially benefit from some linting.


SQLFluff has now a tighter integration with dbt through the “dbt” templater. It is the recommended templater for dbt projects. If used, it eliminates the need for the overrides described in this section.

To use the dbt templater, go to dbt Project Configuration.

SQLFluff anticipates this use case and provides some built in macro blocks in the Default Configuration which assist in getting started with dbt projects. In particular it provides mock objects for:

  • ref: The mock version of this provided simply returns the model reference as the name of the table. In most cases this is sufficient.

  • config: A regularly used macro in dbt to set configuration values. For linting purposes, this makes no difference and so the provided macro simply returns nothing.


If there are other builtin macros which would make your life easier, consider submitting the idea (or even better a pull request) on github.

Interaction with --ignore=templating

Ignoring Jinja templating errors provides a way for users to use SQLFluff while reducing or avoiding the need to spend a lot of time adding variables to [sqlfluff:templater:jinja:context].

When --ignore=templating is enabled, the Jinja templater behaves a bit differently. This additional behavior is usually but not always helpful for making the file at least partially parsable and fixable. It definitely doesn’t guarantee that every file can be fixed, but it’s proven useful for some users.

Here’s how it works:

  • Within the expanded SQL, undefined variables are automatically replaced with the corresponding string value.

  • If you do: {% include query %}, and the variable query is not defined, it returns a “file” containing the string query.

  • If you do: {% include "query_file.sql" %}, and that file does not exist or you haven’t configured a setting for load_macros_from_path, it returns a “file” containing the text query_file.

For example:

select {{ my_variable }}
from {% include "my_table.sql" %}

is interpreted as:

select my_variable
from my_table
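The undefined-variable behaviour can be mimicked with a mapping whose missing keys render as their own name. This is only a toy model of what the templater does under --ignore=templating:

```python
import re

class NameAsValue(dict):
    """Missing variables render as their own name."""
    def __missing__(self, key):
        return key

def render(sql: str, context: dict) -> str:
    lookup = NameAsValue(context)
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(lookup[m.group(1)]), sql)

print(render("select {{ my_variable }} from tbl", {}))
# prints: select my_variable from tbl
print(render("select {{ my_variable }} from tbl", {"my_variable": "col_a"}))
# prints: select col_a from tbl
```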

The values provided by the Jinja templater act a bit (not exactly) like a mixture of several types:

  • str

  • int

  • list

  • Jinja’s Undefined class

Because the values behave like Undefined, it’s possible to replace them using Jinja’s default() filter. For example:

select {{ my_variable | default("col_a") }}
from my_table

is interpreted as:

select col_a
from my_table

Library Templating

If using SQLFluff for dbt with jinja as your templater, you may have library function calls within your sql files that cannot be templated via the normal macro templating mechanisms:

SELECT foo, bar FROM baz {{ dbt_utils.group_by(2) }}

To template these libraries, you can use the sqlfluff:jinja:library_path config option:

[sqlfluff:templater:jinja]
library_path = sqlfluff_libs

This will pull in any python modules from that directory and allow sqlfluff to use them in templates. In the above example, you might define a file in sqlfluff_libs/ as:

def group_by(n):
    return "GROUP BY 1,2"

If an __init__.py is detected, it will be loaded alongside any modules and submodules found within the library path.

SELECT
   {{ custom_sum('foo', 'bar') }},
   {{ foo.bar.another_sum('foo', 'bar') }}
FROM
   baz

sqlfluff_libs/__init__.py:

def custom_sum(a: str, b: str) -> str:
    return a + b

sqlfluff_libs/foo/__init__.py:

# empty file

sqlfluff_libs/foo/bar.py:

def another_sum(a: str, b: str) -> str:
    return a + b

dbt Project Configuration


From sqlfluff version 0.7.0 onwards, the dbt templater has been moved to a separate plugin and python package. Projects that were already using the dbt templater may initially fail after an upgrade to 0.7.0+. See the installation instructions below to install the dbt templater.

dbt templating is still a relatively new feature added in 0.4.0 and is still in very active development! If you encounter an issue, please let us know in a GitHub issue or on the SQLFluff slack workspace.

dbt is not the default templater for SQLFluff (it is jinja). dbt is a complex tool, so using the default jinja templater will be simpler. You should be aware when using the dbt templater that you will be exposed to some of the complexity of dbt. Users may wish to try both templaters and choose according to how they intend to use SQLFluff.

A simple rule of thumb might be:

  • If you are using SQLFluff in a CI/CD context, where speed is not critical but accuracy in rendering sql is, then the dbt templater may be more appropriate.

  • If you are using SQLFluff in an IDE or on a git hook, where speed of response may be more important, then the jinja templater may be more appropriate.


Pros of the dbt templater:

  • Most (potentially all) macros will work

Cons of the dbt templater:

  • More complex, e.g. using it successfully may require deeper understanding of your models and/or macros (including third-party macros)

    • More configuration decisions to make

    • Best practices are not yet established or documented

  • If your dbt model files access a database at compile time, using SQLFluff with the dbt templater will also require access to a database.

    • Note that you can often point SQLFluff and the dbt templater at a test database (i.e. it doesn’t have to be the production database).

  • Runs slower

Installation & Configuration

In order to get started using SQLFluff with a dbt project you will first need to install the relevant dbt adapter for your dialect and the sqlfluff-templater-dbt package using your package manager of choice (e.g. pip install dbt-postgres sqlfluff-templater-dbt), and then add the following configuration:

In .sqlfluff:

[sqlfluff]
templater = dbt

In .sqlfluffignore:

# dbt <1.0.0
dbt_modules/
# dbt >=1.0.0
dbt_packages/

You can set the dbt project directory, profiles directory and profile with:

[sqlfluff:templater:dbt]
project_dir = <relative or absolute path to dbt_project directory>
profiles_dir = <relative or absolute path to the directory that contains the profiles.yml file>
profile = <dbt profile>
target = <dbt target>


If the profiles_dir setting is omitted, SQLFluff will look for the profile in the default location, which varies by operating system. On Unix-like operating systems (e.g. Linux or macOS), the default profile directory is ~/.dbt/. On Windows, you can determine your default profile directory by running dbt debug --config-dir.

To use builtin dbt Jinja functions, SQLFluff provides a configuration option that enables their usage within templates.

[sqlfluff:templater:jinja]
apply_dbt_builtins = True

This will provide dbt macros like ref, var and is_incremental(). If the need arises, builtin dbt macros can be customised via Jinja macros in the .sqlfluff configuration file.

[sqlfluff:templater:jinja:macros]
# Macros provided as builtins for dbt projects
dbt_ref = {% macro ref(model_ref) %}{{model_ref}}{% endmacro %}
dbt_source = {% macro source(source_name, table) %}{{source_name}}_{{table}}{% endmacro %}
dbt_config = {% macro config() %}{% for k in kwargs %}{% endfor %}{% endmacro %}
dbt_var = {% macro var(variable, default='') %}item{% endmacro %}
dbt_is_incremental = {% macro is_incremental() %}True{% endmacro %}

If your project requires that you pass variables to dbt through the command line, you can specify them in the templater:dbt:context section of .sqlfluff. See below configuration and its equivalent dbt command:

[sqlfluff:templater:dbt:context]
my_variable = 1

dbt run --vars '{"my_variable": 1}'

Known Caveats

  • To use the dbt templater, you must set templater = dbt in the .sqlfluff config file in the directory where sqlfluff is run. The templater cannot be changed in .sqlfluff files in subdirectories.

  • In SQLFluff 0.4.0 using the dbt templater requires that all files within the root and child directories of the dbt project must be part of the project. If there are deployment scripts which refer to SQL files not part of the project for instance, this will result in an error. You can overcome this by adding any non-dbt project SQL files to .sqlfluffignore.

CLI Arguments

You already know you can pass arguments (--verbose, --exclude-rules, etc.) through the CLI commands (lint, fix, etc.):

$ sqlfluff lint my_code.sql -v --exclude-rules LT08,RF02

You might have arguments that you pass through every time, e.g. rules you always want to ignore. These can also be configured:

[sqlfluff]
verbose = 1
exclude_rules = LT08,RF02

Note that while the exclude_rules config looks similar to the above example, the verbose config has an integer value. This is because verbose is stackable, meaning there are multiple levels of verbosity that are available for configuration. See CLI Reference for more details about the available CLI arguments. For more details about rule exclusion, see Enabling and Disabling Rules.

Ignoring Errors & Files

Ignoring individual lines

Similar to flake8’s ignore, individual lines can be ignored by adding -- noqa to the end of the line. Additionally, specific rules can be ignored by quoting their code or the category.

-- Ignore all errors
SeLeCt  1 from tBl ;    -- noqa

-- Ignore rule CP02 & rule CP03
SeLeCt  1 from tBl ;    -- noqa: CP02,CP03

-- Ignore all parsing errors
SeLeCt from tBl ;       -- noqa: PRS

Ignoring line ranges

Similar to pylint’s “pylint” directive, ranges of lines can be ignored by adding -- noqa:disable=<rule>[,...] | all to the line. Following this directive, specified rules (or all rules, if “all” was specified) will be ignored until a corresponding -- noqa:enable=<rule>[,...] | all directive.

-- Ignore rule AL02 from this line forward
SELECT col_a a FROM foo -- noqa: disable=AL02

-- Ignore all rules from this line forward
SELECT col_a a FROM foo -- noqa: disable=all

-- Enforce all rules from this line forward
SELECT col_a a FROM foo -- noqa: enable=all
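A linter applying these directives effectively carries a set of disabled rules forward line by line. A rough sketch of that bookkeeping (not SQLFluff's implementation):

```python
import re

DIRECTIVE = re.compile(r"--\s*noqa:\s*(disable|enable)=(\S+)")

def ignored_rules_by_line(sql: str):
    """Return, for each line, the set of rules currently disabled there."""
    disabled, result = set(), []
    for line in sql.splitlines():
        m = DIRECTIVE.search(line)
        if m:
            action, rules = m.groups()
            targets = {"all"} if rules == "all" else set(rules.split(","))
            if action == "disable":
                disabled |= targets
            else:  # enable: clear everything for "all", else re-enable listed rules
                disabled = set() if "all" in targets else disabled - targets
        result.append(set(disabled))
    return result

sql = ("SELECT col_a a FROM foo -- noqa: disable=AL02\n"
       "SELECT col_b b FROM bar\n"
       "SELECT 1 -- noqa: enable=all")
print(ignored_rules_by_line(sql))  # prints: [{'AL02'}, {'AL02'}, set()]
```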

Ignoring types of errors

General categories of errors can be ignored using the --ignore command line option or the ignore setting in .sqlfluff. Types of errors that can be ignored include:

  • lexing

  • linting

  • parsing

  • templating


Similar to Git’s .gitignore and Docker’s .dockerignore, SQLFluff supports a .sqlfluffignore file to control which files are and aren’t linted. Under the hood we use the python pathspec library which also has a brief tutorial in their documentation.
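Matching uses gitignore-style wildcards. As a rough stdlib approximation of the matching behaviour (SQLFluff itself uses pathspec, which implements the full gitignore semantics):

```python
from fnmatch import fnmatch

# Patterns as they might appear in a .sqlfluffignore file.
patterns = ["temp/*", "testing.sql", "*.tsql"]

def is_ignored(path: str) -> bool:
    # Check the full path and the bare filename against each pattern.
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(path, p) or fnmatch(name, p) for p in patterns)

print(is_ignored("temp/model.sql"))    # True: inside the "temp" path
print(is_ignored("sub/testing.sql"))   # True: named "testing.sql"
print(is_ignored("models/final.sql"))  # False
```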

An example of a potential .sqlfluffignore placed in the root of your project would be:

# Comments start with a hash.

# Ignore anything in the "temp" path
/temp

# Ignore anything called "testing.sql"
testing.sql

# Ignore any ".tsql" files
*.tsql

Ignore files can also be placed in subdirectories of a path which is being linted, and they will be applied within that subdirectory.

Default Configuration

The default configuration is as follows, note the Builtin Macro Blocks in section [sqlfluff:templater:jinja:macros] as referred to above.

  2# verbose is an integer (0-2) indicating the level of log output
  3verbose = 0
  4# Turn off color formatting of output
  5nocolor = False
  6# Supported dialects
  7# Or run 'sqlfluff dialects'
  8dialect = None
  9# One of [raw|jinja|python|placeholder]
 10templater = jinja
 11# Comma separated list of rules to check, default to all
 12rules = all
 13# Comma separated list of rules to exclude, or None
 14exclude_rules = None
 15# The depth to recursively parse to (0 for unlimited)
 16recurse = 0
 17# Below controls SQLFluff output, see max_line_length for SQL output
 18output_line_length = 80
 19# Number of passes to run before admitting defeat
 20runaway_limit = 10
 21# Ignore errors by category (one or more of the following, separated by commas: lexing,linting,parsing,templating)
 22ignore = None
 23# Warn only for rule codes (one of more rule codes, seperated by commas: e.g. LT01,LT02)
 24# Also works for templating and parsing errors by using TMP or PRS
 25warnings = None
 26# Ignore linting errors found within sections of code coming directly from
 27# templated code (e.g. from within Jinja curly braces. Note that it does not
 28# ignore errors from literal code found within template loops.
 29ignore_templated_areas = True
 30# can either be autodetect or a valid encoding e.g. utf-8, utf-8-sig
 31encoding = autodetect
 32# Ignore inline overrides (e.g. to test if still required)
 33disable_noqa = False
 34# Comma separated list of file extensions to lint
 35# NB: This config will only apply in the root folder
 36sql_file_exts = .sql,.sql.j2,.dml,.ddl
 37# Allow fix to run on files, even if they contain parsing errors
 38# Note altering this is NOT RECOMMENDED as can corrupt SQL
 39fix_even_unparsable = False
 40# Very large files can make the parser effectively hang.
 41# The more efficient check is the _byte_ limit check which
 42# is enabled by default. The previous _character_ limit check
 43# is still present for backward compatibility. This will be
 44# removed in a future version.
 45# Set either to 0 to disable.
 46large_file_skip_char_limit = 0
 47large_file_skip_byte_limit = 20000
 48# CPU processes to use while linting.
 49# If positive, just implies number of processes.
 50# If negative or zero, implies number_of_cpus - specified_number.
 51# e.g. -1 means use all processors but one. 0  means all cpus.
 52processes = 1
 53# Max line length is set by default to be in line with the dbt style guide.
 55# Set to zero or negative to disable checks.
 56max_line_length = 80
 59# See
 60indent_unit = space
 61tab_space_size = 4
 62indented_joins = False
 63indented_ctes = False
 64indented_using_on = True
 65indented_on_contents = True
 66allow_implicit_indents = False
 67template_blocks_indent = True
 68# This is a comma seperated list of elements to skip
 69# indentation edits to.
 70skip_indentation_in = script_content
 71# If comments are found at the end of long lines, we default to moving
 72# them to the line _before_ their current location as the convention is
 73# that a comment precedes the line it describes. However if you prefer
 74# comments moved _after_, this configuration setting can be set to "after".
 75trailing_comments = before
 77# Layout configuration
 78# See
 80spacing_before = touch
 81line_position = trailing
 84spacing_within = touch
 85line_position = leading
 88spacing_before = touch
 89line_position = trailing
 92spacing_before = touch
 95line_position = alone:strict
 98spacing_after = touch
101spacing_before = touch
104spacing_after = touch
107spacing_before = touch
110spacing_after = touch
113spacing_before = touch
116spacing_before = touch
117spacing_after = touch:inline
120spacing_before = touch
121spacing_after = touch

[sqlfluff:layout:type:comparison_operator]
spacing_within = touch
line_position = leading

[sqlfluff:layout:type:assignment_operator]
spacing_within = touch
line_position = leading

[sqlfluff:layout:type:object_reference]
spacing_within = touch:inline

[sqlfluff:layout:type:numeric_literal]
spacing_within = touch:inline

[sqlfluff:layout:type:sign_indicator]
spacing_after = touch:inline

[sqlfluff:layout:type:function_name]
spacing_within = touch:inline
spacing_after = touch:inline

[sqlfluff:layout:type:array_type]
spacing_within = touch:inline

[sqlfluff:layout:type:typed_array_literal]
spacing_within = touch

[sqlfluff:layout:type:sized_array_type]
spacing_within = touch

[sqlfluff:layout:type:struct_type]
spacing_within = touch:inline

[sqlfluff:layout:type:bracketed_arguments]
spacing_before = touch:inline

[sqlfluff:layout:type:typed_struct_literal]
spacing_within = touch

[sqlfluff:layout:type:semi_structured_expression]
spacing_within = touch:inline
spacing_before = touch:inline

[sqlfluff:layout:type:array_accessor]
spacing_before = touch:inline

[sqlfluff:layout:type:colon]
spacing_before = touch

[sqlfluff:layout:type:comment]
spacing_before = any
spacing_after = any

[sqlfluff:layout:type:placeholder]
# Placeholders exist "outside" the rendered SQL syntax
# so we shouldn't enforce any particular spacing around
# them.
spacing_before = any
spacing_after = any

[sqlfluff:layout:type:common_table_expression]
# The definition part of a CTE should fit on one line where possible.
# Users who regularly define column names in their CTEs
# may wish to relax this config to just `single`.
spacing_within = single:inline
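# Illustrative relaxation mentioned above (not the default), allowing the
# CTE definition to span multiple lines:
# spacing_within = single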

# By setting a selection of clauses to "alone", we hint to the reflow
# algorithm that in the case of a long single line statement, the
# first place to add newlines would be around these clauses.
# Setting this to "alone:strict" would always _force_ line breaks
# around them even if the line isn't too long.
[sqlfluff:layout:type:select_clause]
line_position = alone

[sqlfluff:layout:type:where_clause]
line_position = alone

[sqlfluff:layout:type:from_clause]
line_position = alone

[sqlfluff:layout:type:join_clause]
line_position = alone

[sqlfluff:layout:type:groupby_clause]
line_position = alone

[sqlfluff:layout:type:orderby_clause]
# NOTE: Order by clauses appear in many places other than in a select
# clause. To avoid unexpected behaviour we use `leading` in this
# case rather than `alone`.
line_position = leading

[sqlfluff:layout:type:having_clause]
line_position = alone

[sqlfluff:layout:type:limit_clause]
line_position = alone

# Template loop tokens shouldn't dictate spacing around them.
[sqlfluff:layout:type:template_loop]
spacing_before = any
spacing_after = any

[sqlfluff:templater]
unwrap_wrapped_queries = True

[sqlfluff:templater:jinja]
apply_dbt_builtins = True

# Some rules can be configured directly from the config common to other rules
[sqlfluff:rules]
allow_scalar = True
single_table_references = consistent
unquoted_identifiers_policy = all
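# Illustrative override (not a default): to require fully qualified column
# references even in single-table queries, a project config could set:
# single_table_references = qualified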

[sqlfluff:rules:capitalisation.keywords]
# Keywords
capitalisation_policy = consistent
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:capitalisation.identifiers]
# Unquoted identifiers
extended_capitalisation_policy = consistent
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:capitalisation.functions]
# Function names
extended_capitalisation_policy = consistent
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:capitalisation.literals]
# Null & Boolean Literals
capitalisation_policy = consistent
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:capitalisation.types]
# Data Types
extended_capitalisation_policy = consistent
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None
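# For example (an illustrative override, not the default), to force
# upper-case keywords, a project config could set, in its keywords section:
# capitalisation_policy = upper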

[sqlfluff:rules:ambiguous.join]
# Fully qualify JOIN clause
fully_qualify_join_types = inner

[sqlfluff:rules:ambiguous.column_references]
# GROUP BY/ORDER BY column references
group_by_and_order_by_style = consistent

[sqlfluff:rules:aliasing.table]
# Aliasing preference for tables
aliasing = explicit

[sqlfluff:rules:aliasing.column]
# Aliasing preference for columns
aliasing = explicit

[sqlfluff:rules:aliasing.length]
min_alias_length = None
max_alias_length = None

[sqlfluff:rules:aliasing.forbid]
# Avoid table aliases in from clauses and join conditions.
# Disabled by default for all dialects unless explicitly enabled.
# We suggest instead using aliasing.length (AL06) in most cases.
force_enable = False
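# Illustrative override (not a default): to require aliases of at least
# three characters via the aliasing.length section above:
# min_alias_length = 3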

[sqlfluff:rules:convention.select_trailing_comma]
# Trailing commas
select_clause_trailing_comma = forbid

[sqlfluff:rules:convention.count_rows]
# Consistent syntax to count all rows
prefer_count_1 = False
prefer_count_0 = False

[sqlfluff:rules:convention.terminator]
# Semi-colon formatting approach
multiline_newline = False
require_final_semicolon = False

[sqlfluff:rules:convention.blocked_words]
# Comma separated list of blocked words that should not be used
blocked_words = None
blocked_regex = None
match_source = False

[sqlfluff:rules:convention.quoted_literals]
# Consistent usage of preferred quotes for quoted literals
# Disabled for dialects that do not support single and double quotes
# for quoted literals (e.g. Postgres)
preferred_quoted_literal_style = consistent
force_enable = False

[sqlfluff:rules:convention.casting_style]
# SQL type casting
preferred_type_casting_style = consistent
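# Illustrative override (not a default): the blocked_words section above
# could be used to flag uses of a retired object, e.g. a hypothetical
# deprecated table name:
# blocked_words = old_customer_table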

[sqlfluff:rules:references.from]
# References must be in FROM clause
# Disabled for some dialects (e.g. bigquery)
force_enable = False

[sqlfluff:rules:references.qualification]
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:references.consistent]
# References must be consistently used
# Disabled for some dialects (e.g. bigquery)
force_enable = False

[sqlfluff:rules:references.keywords]
# Keywords should not be used as identifiers.
unquoted_identifiers_policy = aliases
quoted_identifiers_policy = none
# Comma separated list of words to ignore for this rule
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:references.special_chars]
# Special characters in identifiers
unquoted_identifiers_policy = all
quoted_identifiers_policy = all
allow_space_in_identifier = False
additional_allowed_characters = None
ignore_words = None
ignore_words_regex = None

[sqlfluff:rules:references.quoting]
# Policy on quoted and unquoted identifiers
prefer_quoted_identifiers = False
ignore_words = None
ignore_words_regex = None
force_enable = False

[sqlfluff:rules:layout.long_lines]
# Line length
ignore_comment_lines = False
ignore_comment_clauses = False

[sqlfluff:rules:layout.select_targets]
wildcard_policy = single

[sqlfluff:rules:structure.subquery]
# By default, allow subqueries in from clauses, but not join clauses
forbid_subquery_in = join
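# Illustrative override (not a default): to forbid subqueries in both FROM
# and JOIN clauses:
# forbid_subquery_in = both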