Squidex ships with a powerful management UI based on state-of-the-art frontend technologies to make the daily work of content editors as easy as possible.
The following features stand out:
Each content field has a type, such as text or number. Depending on the type, Squidex provides a wide range of editors. Text fields support single-line text, multiline text, rich text, Markdown editors and much more. The same is true for the other field types.
If the built-in editors do not fulfill your needs, you can integrate Squidex with a custom editor. This option is available in the self-hosted version as well as in the cloud. Custom editors run in a sandbox and can never cause failures in the management UI.
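To give an idea of how such an integration looks, here is a minimal sketch in TypeScript. It assumes the editor SDK script (editor-sdk.js) from your Squidex instance is loaded on the page and exposes a SquidexFormField class with the methods shown; treat the exact API surface as an assumption and consult the documentation.

```typescript
// Minimal custom editor sketch. Assumes the Squidex editor SDK
// (editor-sdk.js) is loaded on the page and provides SquidexFormField;
// treat the exact method names as assumptions.
declare class SquidexFormField {
    onValueChanged(callback: (value: unknown) => void): void;
    valueChanged(value: unknown): void;
    touched(): void;
}

const field = new SquidexFormField();
const input = document.getElementById('editor') as HTMLInputElement;

// Receive the current value from the management UI.
field.onValueChanged((value) => {
    input.value = typeof value === 'string' ? value : '';
});

// Push changes back to the management UI.
input.addEventListener('input', () => field.valueChanged(input.value));

// Mark the field as touched so validation messages can be shown.
input.addEventListener('blur', () => field.touched());
```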
In contrast to many competitors, we have decided to use a dedicated list view per content type. This provides a lot of flexibility, because you can decide which fields to show in the content overview.
Content fields can be marked as inline editable. If a content type has many records that need to be updated frequently, this can be done directly in the content overview. There is no need to navigate to each content item and make the update there, which is a big time saver.
Squidex provides a powerful asset management solution.
This includes the following features:
Assets can be managed in folders to organize them in the way content editors are already used to from traditional file systems.
When an asset is uploaded, it is automatically annotated with tags. This organizes assets along an additional dimension and lets you find all assets with common tags across all your folders.
Squidex has a powerful metadata system that supports all major image, audio and video formats and extracts the metadata from the files to make it available to your content editors. Metadata can also be changed and used in filters, for example to get all videos of a certain length.
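As a rough sketch of such a filter (the duration metadata key and seconds as its unit are assumptions here; the app name and access token are placeholders):

```typescript
// Sketch: find all videos longer than ten minutes via a metadata filter.
// The 'duration' key (and seconds as its unit) are assumptions; 'my-app'
// is a placeholder for your app name.
async function findLongVideos(accessToken: string) {
    const params = new URLSearchParams({ $filter: 'metadata/duration gt 600' });

    const response = await fetch(
        `https://cloud.squidex.io/api/apps/my-app/assets?${params}`,
        { headers: { Authorization: `Bearer ${accessToken}` } },
    );

    return (await response.json()).items;
}
```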
Assets are usually identified by a randomly generated identifier, and you need this identifier to find and download an asset. If an additional level of security is needed, assets can also be marked as protected. Then only authorized users with the right permissions can download them.
If Squidex is hosted on your own servers, several options are available for where the assets are stored. One of the following storage systems can be chosen: Amazon S3, Azure Blob Storage, Google Cloud Storage, FTP, File System or MongoDB.
Assets also support versioning. When an asset is replaced with a newer version of the file, the previous versions remain accessible.
When a backup of your project is created, the asset files are included in the backup and restored with it. This also includes the previous versions of your assets.
Today companies operate worldwide and many content items are provided in multiple languages, especially in Europe, where a lot of different languages are spoken. Therefore Squidex was built for a globalized world from the start.
Each project supports an unlimited number of languages, even if you start with the free plan in the Squidex cloud.
Whether to support localization is a per-field decision, because content is very often mixed. If you manage products in Squidex, you often want a common URL and name across all languages but a localized description.
Content fields can be marked as required. A content editor then needs to enter a value for such a field before the content item can be saved. This can be a problem when a new market is about to be entered and not all texts are available yet. Therefore a language can also be marked as optional, which turns off the required validation for that language.
Very often texts are not available in all languages yet. Therefore you can define a list of fallback languages per language. If a content value is requested in a certain language and is not available, the fallback languages are used in the defined order to find a suitable alternative value.
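In the API this mixed model is visible in the content payload itself: localized fields are partitioned by language code, while non-localized fields use the invariant key iv. The sketch below uses hypothetical field names:

```typescript
// Shape of a content item with mixed localization (hypothetical fields).
// Localized fields are partitioned by language code; non-localized
// (invariant) fields use the key 'iv'.
const product = {
    slug: { iv: 'super-shoe' },          // shared across all languages
    name: { iv: 'Super Shoe' },
    description: {
        en: 'A very comfortable shoe.',
        de: 'Ein sehr bequemer Schuh.',
        // No 'sv' value yet: if Swedish defines English as a fallback
        // language, a request for 'sv' returns the English description.
    },
};
```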
We built the backend on top of a principle called event sourcing: all changes in the system are recorded as a sequence of events and stored permanently in the database. This means that you can always reason about who has changed what, and you can also use the events to react to changes.
This principle enables several features that would be very difficult to implement otherwise.
The sequence of events provides a reliable audit log. An audit log records all changes in the system and provides endpoints and user interfaces to read them. Very often an audit log is built as a secondary data source and is not used to reason about the state of the system internally, so there is no guarantee that it is actually correct. With event sourcing the events are the source of truth, and it is guaranteed that they are correct.
Because we store all events that have ever happened in the system, we can also restore the state of a content item at any given point in time. You can load previous versions of your content and assets and compare them with the current state to understand what has changed over time.
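As a generic illustration of why this works (this shows the idea, not Squidex's actual implementation): the state of an item is a fold over its events, so replaying only the events up to a given timestamp reconstructs any past version.

```typescript
// Generic event-sourcing illustration: the current state is derived by
// replaying all events; replaying only events up to a point in time
// yields the state the content item had back then.
interface StoredEvent {
    timestamp: number;
    apply(state: Record<string, unknown>): Record<string, unknown>;
}

function stateAt(events: StoredEvent[], pointInTime: number) {
    return events
        .filter((event) => event.timestamp <= pointInTime)
        .reduce((state, event) => event.apply(state), {} as Record<string, unknown>);
}
```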
When we make a backup, we do not write the current state to the backup file. Instead, we write all events that belong to your project. When you restore the backup, you therefore keep the full history of all changes and can still move back to previous versions of a content item.
The event system also allows us to react to changes. This is used by our integration system, which handles the most important events that happen inside the system.
Our rule system makes it possible to react to changes and push your data to other systems. This can be used to synchronize your data with other software and to automate your workflows.
To define when something should happen, you create a trigger. For example, you can create a rule that is triggered whenever a content item or an asset is changed. But very often you are only interested in very specific events, for example when a blog post is published. Therefore you can use JavaScript expressions to define when exactly something should happen.
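For illustration, a condition for a content-changed trigger could look like the sketch below. The property names on the event are assumptions; check the documentation for the fields your trigger actually exposes.

```typescript
// The rule editor evaluates a JavaScript expression against the event.
// The properties used below ('status', 'schemaId') are assumptions;
// 'event' is declared here only so that the sketch type-checks.
declare const event: { status: string; schemaId: { name: string } };

// Fire only when a blog post gets published:
const condition = event.status === 'Published' && event.schemaId.name === 'blog-post';
```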
Actions define what should happen when a rule is triggered. We support a wide range of actions. For example, you can create Slack notifications, publish tweets or synchronize your data with Algolia full-text indices.
We also provide a command line interface (CLI) to automate tasks. It can be used to export or import your content to CSV or to keep several projects in sync. The CLI can also be used by a system administrator to configure automatic daily backups.
Data that just sits in Squidex is not useful. Therefore we have built an advanced filtering and query system to provide the data you actually need and nothing more:
Use GraphQL to get API responses that are tailored to your requirements and do not include any extra information or content fields you do not need. You can also resolve other content items or assets that are linked to your content item and get them with a single API call.
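A minimal sketch of such a query, assuming an app named my-app and a schema named blog-post; the query&lt;Schema&gt;Contents naming and the endpoint follow the documented pattern, but all names are placeholders for your own project:

```typescript
// Sketch: fetch only the fields you need via GraphQL. App, schema and
// field names are placeholders for your own project.
async function fetchPosts(accessToken: string) {
    const query = `{
        queryBlogPostContents(top: 10) {
            id
            data {
                title { en }
                slug { iv }
            }
        }
    }`;

    const response = await fetch('https://cloud.squidex.io/api/content/my-app/graphql', {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ query }),
    });

    return (await response.json()).data;
}
```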
We provide an outstanding filtering and sorting system with two alternative syntaxes, based on OData or JSON. Filtering is possible on almost all content fields and meta fields, and a wide range of comparison operators is supported. Furthermore, complex filters with AND, OR and NOT operators can be used. You can also sort on one or multiple content fields and use pagination.
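For example, a query with the OData syntax could be built like this (the data/&lt;field&gt;/&lt;partition&gt; path convention follows the documentation; app, schema and field names are placeholders):

```typescript
// Sketch: filtering, sorting and pagination with the OData syntax.
// App, schema and field names are placeholders.
const params = new URLSearchParams({
    $filter: "data/price/iv gt 10 and data/color/en eq 'red'",
    $orderby: 'lastModified desc',
    $top: '20',
    $skip: '0',
});

const url = `https://cloud.squidex.io/api/content/my-app/products?${params}`;
```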
All content items are indexed by a full-text system. Stopwords are automatically excluded based on the language of the content. The system also supports approximate matching to find words that match approximately rather than exactly, for example when your content or search text contains spelling errors.
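Full-text search is requested with the $search option instead of $filter, as in this sketch (app and schema names are placeholders):

```typescript
// Sketch: full-text search uses $search instead of $filter. The misspelled
// 'schoe' can still match content containing 'shoe' via approximate matching.
const searchUrl = 'https://cloud.squidex.io/api/content/my-app/products?'
    + new URLSearchParams({ $search: 'comfortable schoe' });
```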
We know that large organizations have complicated workflows to ensure that only high-quality content goes out to the public and to clients. Therefore we have built an advanced system to support your editorial workflows.
A workflow is defined by a number of stages. Each content item is in exactly one of these stages at a time. The Published stage is mandatory and means that the content is accessible to your mobile apps, websites or other services. But you can define as many stages as you want, and also the transitions between them.
A content item can only be moved from one stage to another when a transition between the two stages exists. These transitions can also be restricted to user groups, and updates can be prevented depending on the stage.
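As a sketch of how such a transition is performed through the API (the endpoint shape follows the documented content API; ids and the stage name are placeholders, and the call only succeeds if the workflow allows the transition for the calling user):

```typescript
// Sketch: move a content item to another stage through the API. Ids and
// the stage name are placeholders; the call only succeeds if the workflow
// defines this transition for the calling user.
async function publish(accessToken: string, contentId: string) {
    await fetch(`https://cloud.squidex.io/api/content/my-app/articles/${contentId}/status`, {
        method: 'PUT',
        headers: {
            'Authorization': `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ status: 'Published' }),
    });
}
```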
The backend is built on a distributed actor system from Microsoft called Orleans. This system was built to support the high-performance needs of modern multiplayer computer games such as Halo.
By leveraging Microsoft Orleans we can offer a few advantages:
Orleans was built for clusters with several thousand servers. The load is distributed automatically among the cluster members based on the CPU and memory usage of your servers.
Background jobs are also distributed automatically in the cluster, so there is no need to set up a dedicated machine for them. When a cluster member goes down, its background tasks are automatically transferred to other members.
A lot of data is handled in memory and distributed and shared within the cluster. Therefore most operations require fewer than five calls to the database, often far fewer. This keeps the load on the database servers low and ensures the best possible performance.
Due to the nature of Orleans, no dedicated caching layer is needed and you do not have to deploy a cache server such as Redis or Memcached.