1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

Hungary has no legislation that is specifically dedicated to AI. However, there are numerous Hungarian laws which deal with the use of algorithms or AI (explicitly or implicitly), such as the following:

  • Data protection: As the data protection laws are technology neutral and AI operation, by definition, is based on data, these laws have a significant impact on AI.
  • Copyright: This includes the rights that parties can claim on:
    • the components of AI (eg, databases or raw materials used to train AI models); and
    • elements generated with the help of AI (eg, software, creations).
  • Data economy: The legislative framework on national data assets regulates the use of public data (including in relation to the potential use of AI).
  • E-administration: The legislative framework on e-administration contains explicit rules on electronic administration tasks and activities conducted with the help of AI.
  • Contract law: Parties may agree on rights and obligations in relation to AI projects.
  • Tort: Various forms of liability (eg, liability for hazardous activity, liability based on fault, product liability) could apply to AI by analogy; however, this has not yet been tested before the Hungarian courts.
  • Other sectoral laws: As there is no dedicated AI legislation, various sector-specific rules (eg, on pharmaceuticals, financial services, health services) may apply to AI depending on the sector in which it is used.

In line with the EU Digital Strategy, numerous proposals at the EU level – including the AI Act and the Data Act – will significantly change the Hungarian landscape once adopted.

1.2 How is established or 'background' law evolving to cover AI in your jurisdiction?

Hungary has neither established a dedicated legal framework for AI, nor any soft law in this regard. However, the National AI Strategy has set a framework for future national legislation in conformity with the applicable EU legislative instruments (see question 1.8).

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

There is a general duty in Hungary to take reasonable care in relation to all acts by any person; however, there are no specific requirements in relation to AI. Under the principle of the duty of care, one should act with the care that may generally be expected from a reasonable person in the relevant circumstances. This duty of care forms the basis of fault-based liability.

This principle equally applies to operators of AI technologies. For example, an operator of AI technologies may be liable if it has not adhered to its duty of care when choosing the right AI systems and/or monitoring/maintaining AI systems.

The principle of the general duty of care in the context of AI has not yet been tested before the Hungarian courts.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and 'escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

Liability for robots and other mobile AI has not yet been tested before the Hungarian courts. Nevertheless, the general law may apply to mobile AI by analogy in Hungary, especially the following:

  • Strict liability for hazardous activities: This may apply by analogy to mobile AI, meaning that the operator of mobile AI (as the party in control of the risks connected with its operation) may be liable. This means that an operator of mobile AI:
    • may be liable even in the absence of fault; and
    • may be exempted from liability only if it can prove that the damage occurred in the context of an unavoidable event that was beyond its control.
  • Product liability: Hungary has implemented the EU Product Liability Directive, and as such a manufacturer of mobile AI products may be subject to product liability. This is also a form of strict (no-fault) liability. However, at a practical level, the application of product liability in this context is very challenging, for reasons such as the following:
    • whereas product liability traditionally focuses on the point at which a product is put into circulation, AI products are continually evolving;
    • given the interconnectivity of AI products/systems, it is difficult to capture what exactly constitutes a defect; and
    • the 'black-box' effect of AI makes it difficult for victims to prove the defect.

1.5 Do any special regimes apply in specific areas?

Hungary has no dedicated regime for AI; but various sector-specific laws (eg, data protection, consumer protection, competition, telecommunications) may apply, depending on the context.

In terms of liability, various forms of liability (eg, strict liability, fault-based liability, product liability) could apply to AI by analogy, depending on the context. However, this has not yet been tested before the Hungarian courts and no case law has as yet developed.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

There are no bilateral or multilateral binding agreements with relevance in the AI context outside the European Union's AI legal framework.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

To date, there is no designated body that enforces AI-related rules or requirements in Hungary. Currently, enforcement operates on a sector-specific basis. The relevant bodies include:

  • the domestic courts in civil, labour or criminal disputes;
  • the consumer protection authorities and the Hungarian Competition Authority in cases involving consumer protection or unfair commercial practices involving AI;
  • the Hungarian Competition Authority and the European Commission in the competition law sphere;
  • the National Authority for Data Protection and Freedom of Information (DPA) for data protection cases;
  • the Hungarian National Bank in the financial sector; and
  • the Hungarian National Media and Infocommunications Authority, in relation to the application of the Digital Services Act and infocommunications matters (AI could form part of the infocommunications network).

If a question pertaining to the regulation of AI arises, the authority/court that administers that particular question will enforce that regulation subject to its general powers.

1.8 What is the general regulatory approach to AI in your jurisdiction?

In 2020, the Hungarian government published the country's AI Strategy 2020-2030, which sets out its regulatory approach towards AI.

The strategy confirms that the regulation of AI is necessary at the national level, in conformity with the applicable EU legislative instruments. Among the main areas for regulation, it highlights:

  • the framework for regulating data assets;
  • the creation of a comprehensive AI regulatory environment (including rules on registration, AI-related legal entities, liability/responsibility and industry-specific rules); and
  • the adoption of industry ethical standards.

From the authorities' side, thus far only the DPA has explicitly dealt with a case involving the use of AI. In that particular case, the DPA imposed a fine of HUF 250 million, which suggests that it has adopted a relatively strict regulatory approach when it comes to assessing the operation of AI in compliance with the data protection regulations.

Additionally, the Hungarian National Bank has deployed a regulatory sandbox for fintech companies in order to provide a safe harbour for testing and impact assessment.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

Several initiatives are using AI or paving the way for its future use. The main technologies typically include:

  • chatbot-based customer services;
  • precision agriculture applications;
  • predictive maintenance systems;
  • fleet route optimisation programs;
  • inventory forecasting; and
  • medical diagnostics (in particular, cancer screening).

Surgical robots are also in operation in several hospitals.

There are frameworks in place that should help to underpin the future deployment of AI. Examples include:

  • a test track for autonomous vehicles;
  • an integrated health dataset; and
  • a central identification service for public administrations.

Facial recognition systems and machine vision/image analysis solutions are also applied. In the logistics systems of some factories, AI continuously monitors order and stock levels, including shipments that are still in transit. These systems detect problems in the supply chain and suggest alternative routes or rescheduled deliveries. AI also controls the automated process of loading trucks to optimise the use of loading space. Workstations equipped with visual image processing supported by AI also support quality assurance tasks during production.

However, the percentage of companies currently using AI in Hungary remains extremely low, at only 3%. This figure is higher – above 20% – if only Internet of Things applications are taken into account (according to Eurostat).

In the services arena, the financial and insurance sectors are making strong use of AI-based solutions, while the use of AI in legal services is less common.

2.2 What AI-based products and services are primarily offered?

Primarily, the following AI-based products and services are offered in Hungary:

  • The AI products of Big Tech multinationals are also available on the Hungarian market.
  • The government-initiated and state-funded Artificial Intelligence Coalition is using the Machine Intelligence Designer platform to help industrial engineers to develop deep learning solutions for machine vision and time series analysis problems. Marketed products include AI-enabled communication assistants, voice imaging services and voice transcription services.
  • In the banking sector, AI-based software helps to detect fraud, ensure compliance with anti-money laundering legislation and manage risk.
  • AI-based technology for dermatology has been launched, making it possible for patients to diagnose skin diseases from the comfort of their own homes. This not only simplifies patients' lives, but also allows a doctor to treat up to 40 cases in an hour, increasing the efficiency of care. The technology is also regarded as the first public digital hospital in the European Union.
  • Developments are significantly linked to the digital transformation ambitions of industrial companies (eg, self-driving vehicles).
  • In the labour market, AI is used to evaluate job applications.

2.3 How are AI companies generally structured?

From a legal perspective, AI companies have no specific peculiarities. As regards the ownership structure, AI companies are usually start-ups, established with a well-defined goal of developing and marketing AI-driven products and services. The typical structure is a company with a small number of shareholders, a few of whom contribute to the professional output of the company, while the others provide the necessary funding to finance research and development (R&D).

2.4 How are AI companies generally financed?

AI companies are typically start-ups whose shareholders are generally investment companies of financial institutions or other entrepreneurs that provide the necessary financing for R&D. The state also provides grants for developing AI which may be used in partnerships between industrial players and research institutes (universities).

2.5 To what extent is the state involved in the uptake and development of AI?

The Hungarian state is playing an active role in the development of AI. The Hungarian government announced its AI Strategy in 2020. The role of the state is that of regulator rather than investor: it allocates budgetary resources to subsidise the development of AI, but not as a market investor. It is also attempting to deploy AI in the public administration and other state activities as far as possible.

The Hungarian state considers the development and application of AI as a competitive advantage. It has therefore launched a broad programme of data economy development, application deployment and technology building.

The government is also:

  • introducing the use of AI technology into the services provided by the state; and
  • establishing a framework for the responsible development and use of AI.

In this context, it aims to:

  • develop and promote responsible data asset management;
  • modernise its own processes; and
  • prepare for data and AI governance, with a particular focus on health sector developments and maintaining security.

A further priority for the government is to promote the use of AI by small and medium-sized enterprises (see question 10.2).

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

The state is focusing on regulatory issues and on increasing the accessibility of health data. Making available and exploiting health data assets through modern infrastructure is an explicit regulatory objective, which requires an appropriate regulatory environment. The clear objective is to create the necessary infrastructure to support the use of health data assets; the regulatory environment is intended to facilitate the secondary use of health data.

(b) Security and defence

In the areas of security and defence, the regulatory and public administrative framework designed for automated systems will apply.

Specific security-related areas of focus include:

  • the development of border control systems and complex identification systems;
  • data-driven law enforcement and crime prevention using complex analysis;
  • the introduction of existing AI technologies into the investigative process; and
  • AI-based mapping of offender contact networks.

Specific defence-related areas of focus include:

  • the automation of big data processing, information operations and decision-making systems;
  • the implementation and development of predictive supply systems;
  • the development of autonomous systems in all relevant operational domains (airspace, surface, space, cyberspace);
  • the development of human-machine interaction on both sides;
  • protection against AI-supported systems in all relevant operational spaces (including modelling and simulation); and
  • developments aimed at protecting and analysing the defence-related elements of national data assets.

(c) Autonomous vehicles

The Hungarian regulatory environment allows for flexibility in the conduct of test operations for self-driving vehicles. The government has made the promotion of such developments an explicit public objective by providing appropriate test tracks and an innovation-friendly legal environment.

(d) Manufacturing

In the manufacturing area, an explicit aim is to create a test environment for the analysis of manufacturing data, in order to:

  • facilitate manufacturing-related data management;
  • develop cybersecurity and data protection in manufacturing;
  • implement data standardisation protocols to facilitate data analytics in manufacturing; and
  • introduce manufacturing data to the data marketplace.

To increase the efficiency of manufacturing and promote the development of new manufacturing processes, it is necessary:

  • to centralise research and match it to industrial needs; and
  • to set up an innovation ecosystem (to be undertaken by the future AI National Laboratory).

Short-term areas of focus: These include:

  • parameter control of production processes;
  • manufacturing decision support;
  • quality control with AI tools;
  • online product testing;
  • layout and process simulation;
  • factory optimisation;
  • predictive maintenance;
  • high-accuracy indoor and outdoor positioning systems with 5G and AI;
  • robot control support with AI solutions;
  • artificial vision manufacturing applications;
  • open production IT architecture; and
  • manufacturing in the city.

Medium-term areas of focus: These include:

  • AI use in 6G networks;
  • after-sales product tracking;
  • AI-based data processing;
  • service demand estimation and forecasting;
  • drone management in the industrial domain (sample factory, sample area);
  • automated management of critical machine-to-machine communication;
  • extensive use of Internet of Things devices and private communication devices in the industrial domain (sample area);
  • supply chains;
  • product tracking;
  • optimisation of manufacturing logistics;
  • optimisation of manufacturing energy management; and
  • manufacturing cybersecurity.

With regard to the small and medium-sized enterprise (SME) sector, which is a key engine of the Hungarian economy, there is a need to implement digital transformation projects to ensure that manufacturing SMEs can remain competitive.

(e) Agriculture

The aim is to implement and disseminate AI technologies in line with the digital transformation of the agricultural sector. Agriculture-related focus areas in the AI Strategy include:

  • the development of the Agro-Data Framework by creating a cloud-based data information platform that allows producer (farm-level) and government data related to agriculture to be recorded, processed and stored in a uniform, structured way;
  • the establishment of a Digital Agro-Innovation Centre to develop a digital innovation ecosystem and incubate start-ups using AI technologies. This will include the creation of a testing ground for innovation and testing of robots based on the use of AI technology;
  • revision of the regulations on the use of drones and autonomous machines in the agriculture sector; and
  • the development of a crop forecasting service.

(f) Professional services

Professional services as such are not currently the focus of legislation. In highly regulated sectors such as financial services and insurance, the supervisory authority plays a key role in promoting, implementing and controlling AI-driven technologies. The same applies to the implementation of AI-driven technologies in the public administration. As for other professional services, there is no relevant specific national regulation and none is expected in the near future.

(g) Public sector

With regard to both public administration and case management, the focus is on developing automated decision making and automating processes as far as possible.

(h) Other

Other AI-related developments include the following:

  • Energy: The aim is to utilise data assets in the energy sector in the best possible way and to develop personalised services as a result. Among other things, developments include:
    • the rollout of smart meters;
    • smart grid development;
    • the development of data-driven energy market models;
    • predictive maintenance;
    • autonomous operation; and
    • the development of smart energy supply and optimisation systems.
  • Banking/insurance: Many AI-related projects have been already implemented in these sectors, such as:
    • automatic email responses using language processing;
    • support of credit analysis;
    • identification by analysing transaction patterns;
    • preliminary processing of incoming claims; and
    • modelling of possible damage events.
  • Telecommunications: Several AI-related projects have been already implemented in this sector, including:
    • automated customer service with phonebots/chatbots;
    • forecasting of failures in the network infrastructure; and
    • calibration of network coverage by applying self-learning antennae.

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The major laws in the data protection field are:

  • the General Data Protection Regulation (GDPR); and
  • Act CXII/2011 on the Right of Informational Self-Determination and on Freedom of Information ('Data Protection Act').

In order to implement the GDPR and the EU Law Enforcement Directive (2016/680), the Data Protection Act was comprehensively amended in July 2018. It now contains three groups of provisions:

  • additional procedural and substantive rules on data processing which falls under the scope of the GDPR;
  • rules on data processing which does not fall under the scope of the GDPR; and
  • rules on data processing for law enforcement, national security and national defence purposes.

As both the GDPR and the Data Protection Act are technology neutral, their provisions apply equally to AI companies. Further, both legislative instruments contain dedicated provisions on automated decision making and profiling, which typically involve large datasets and algorithmic AI software.

In general, the GDPR provides that data subjects have the right not to be subject to automated decision making. AI companies may also be required to conduct data protection impact assessments.

AI companies should plan well in advance to implement data protection principles (eg, data minimisation, purpose limitation, transparency and accuracy) prior to the implementation of new AI solutions in line with the principle of privacy by design (rather than treating this as an afterthought).

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The EU cybersecurity regime applies, including:

  • the EU Network and Information Security Directive (2022/2555/EU);
  • the cybersecurity provisions of the EU telecommunications regulations; and
  • the GDPR.

Additional cybersecurity rules apply to public entities that fall under the scope of the Data Protection Act (eg, law enforcement authorities) and to entities that fall under the scope of Act L/2013 on the Information Security of State and Municipal Bodies (including state and municipal bodies and critical infrastructure service providers).

There are no specific implications for AI companies; however, the National Authority for Data Protection and Freedom of Information (DPA) has prioritised cybersecurity and data breach management supervision in recent years. For example, in May 2020, the DPA imposed a GDPR fine of HUF 100 million on a Hungarian telecommunications company after an ethical hacker reported a security vulnerability to the company. Cybersecurity risks are particularly high at AI companies that manage large datasets, potentially including personal data, so attention should be devoted to this issue – especially in light of the DPA's enforcement practice.

5 Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

The application of AI presents numerous competition law-related challenges/concerns, such as the following:

  • the close vertical interrelationships between companies dealing with AI;
  • robust barriers to entry;
  • difficulties in accessing resources for AI (eg, large datasets);
  • challenges relating to interoperability and data portability;
  • the high level of business secrecy; and
  • the cost and deployment of licensing regimes.

In short, the barriers to entry to the AI market are very high.

Hungary has no national legislation addressing this issue, as competition law is highly harmonised across the European Union. The Digital Markets Act covers core platform services – including virtual assistants, cloud computing and online intermediation services – that largely depend on algorithms and AI systems. Among the main obligations regarding data management, core platform service providers designated as gatekeepers:

  • may not use any data provided by business users to adjust their own AI offerings; and
  • must provide business users with access to the data generated by their activities on the gatekeeper's platform.

6 Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

There are several employment-specific challenges, including in relation to:

  • transparency;
  • bias/discrimination;
  • delegation of employer tasks to AI;
  • autonomous decision-making; and
  • liability.

Hungarian law does not specifically address these challenges. However, much can be concluded from the general principles of Hungarian employment law and data protection laws. Also, the proposed EU Platform Work Directive aims to regulate employment relationships in which many of the employees' obligations and conditions are set by algorithms.

From the employment perspective, some of the key takeaways include the following:

  • An employer must notify an employee or job applicant about its use of AI tools.
  • If the use of any AI tool restricts the privacy rights of employees, such usage is only allowed if it passes the "necessity and proportionality" test.
  • Human oversight must be ensured: a person must be responsible for the final HR decision.
  • The employer may be required to carry out a data protection impact assessment.
  • The employer must implement proper policy/measures to prevent employee bias/discrimination (eg, seeking data points outside the existing organisation; ensuring that sensitive characteristics are not the decisive factor).
  • Employees have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them.
  • The works council must be consulted prior to the implementation of AI tools.
  • The employer must state in its internal policy whether the use of AI tools is banned or permitted for the completion of the employee's tasks and who is responsible for the results of such tools.

7 Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

AI presents a risk of potential bias and discrimination if its use is based on manipulated or outdated training data sets. Likewise, if the data set is not sufficiently representative, the AI will be prone to reproduce the same logic in different situations, potentially leading to discriminatory decisions.

AI companies must ensure that AI solutions are built in a robust way, to avoid manipulation or inconsistency in their predictions. Attention should be paid to this in both design and deployment, while continuously monitoring the training sets. The other side of the coin involves the cybersecurity resilience of the AI system, as outlined in question 4.2.

In general, data manipulation and integrity in the context of AI are not specifically addressed in Hungarian law, apart from the GDPR's general principles of accuracy, integrity and accountability.

In this context, the Hungarian AI Strategy aims to create a 'data marketplace' with the possibility of certification to confirm the trustworthiness of AI companies' data sets.

8 AI best practice

8.1 There is currently a surfeit of 'best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

Hungary has not yet developed national best practice guidance on AI. However, it is possible that this will be developed in the future, as the Artificial Intelligence Regulation and Ethics Knowledge Centre has been set up with the aim of resolving legal issues and matters of ethics relating to AI regulation.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

There is no 'one size fits all' solution. However, to ensure well-crafted AI best practice, a company should:

  • conduct risk impact assessments and testing across the entire AI operational lifecycle, from design to implementation;
  • ensure that basic principles are embedded in the AI operation (eg, transparency, fairness, non-discrimination, human supervision, robustness);
  • ensure proper data governance (including data quality control/certification, registration of datasets);
  • develop an independent internal AI ethics committee;
  • provide internal AI-related training;
  • address the needs of different stakeholders; and
  • understand the peculiarities of the local legal and business environment.

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

To implement the right internal processes in relation to AI, it is important to:

  • devise a tailormade action plan with clear targets;
  • establish a dedicated AI team with the right allocation of tasks/responsibilities; and
  • set the tone at the top of the organisation to promote a culture of ethical AI operation.

9 Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

AI can be utilised as part of the contracting process, especially when it comes to the conclusion of a great number of contracts, as in the case of consumer transactions. Sophisticated AI may assist consumers in making choices; but it may also be a tool of pressure and manipulation which is difficult to detect. This primarily involves issues relating to:

  • pre-contractual liability;
  • the duty of disclosure;
  • the duty to cooperate; and
  • in consumer contracts in particular, fairness.

In specific sectors, such as financial services and insurance, the application of AI in the contracting process (eg, by credit scoring) is a critical issue which helps significantly in risk assessment, but also raises issues of liability.

AI makes it possible for big commercial platforms to:

  • monitor consumer feedback and complaints;
  • detect performance problems suffered by sellers on the platform;
  • promote products through the platform; and
  • influence consumer choice through the platform.

This suggests that the position of such commercial platforms is different from that of auction houses, and that they may have greater exposure to liability scenarios.

The contracting party normally has little chance of reducing these risks, because it does not know the algorithms used by the other party. Especially where monopoly positions or consumer contracts are involved, even a transparent algorithm does not properly address the unequal bargaining position. However, such risks may be mitigated through mandatory rules in consumer law, by notifying the consumer that the service is personalised by algorithms, and by shifting the burden of proof or the risks to the party utilising the AI.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

The primary risks include:

  • the opacity of the causal link resulting in the loss;
  • the large number of potential victims or large volume of losses to be compensated; and
  • the difficulty of identifying the damaging conduct.

There may also be some liability gaps. Often, the damage that occurred or the interference with protected rights is difficult to detect because it remains hidden. As the loss is often a result of human-machine or machine-machine (software-software) interaction, or lies in the data, the risk cannot be predicted by the potential tortfeasor; and the actual tortfeasor (or tortfeasors) cannot be identified.

In the context of tort liability, the victim normally has little opportunity to mitigate the loss. Instead, the risk of loss can be reduced by:

  • compliance with ex ante regulatory measures (if applicable);
  • validation by the producer (or the service provider); or
  • adherence to ethical standards and best practices (if any).

The Hungarian system of liability in tort provides:

  • further ways to shift the liability to the producer or to an operator of AI systems instead of the actual wrongdoer; and
  • ways to reverse the burden of proof as to fault or the causal link.

Thus, the structure of tort law and the rules of evidence in civil procedure can achieve an allocation of loss to those who benefited from the activity causing the damage.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

Discrimination is among the biggest social risks associated with AI technology. There may be considerable disagreement in society as to which factors taken into account in the course of decision making could result in discrimination. From a legal and ethical perspective, a choice is discriminatory if it results in social exclusion, even if it is statistically justified.

The issues in Hungary are no different from those in other European societies. Discrimination may result from the biased data used to teach the algorithm or from the algorithm itself.

These risks can be mitigated by, among other things:

  • ensuring human supervision;
  • applying data sets which are sufficiently representative;
  • testing and validating AI systems (including the datasets); and
  • keeping adequate records (eg, programming, training methodologies and techniques used to build, test and validate AI systems), so that AI decisions can be traced back.

10 Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

AI innovations are generally protected in Hungary under IP and trade secret laws primarily as follows:

  • Copyright: AI technologies and their outputs can be protected by copyright if they meet the requirements for an author's work (ie, the bottom line remains that only human-made contributions are eligible for copyright protection).
  • Sui generis database right: AI databases can be protected under the sui generis right of the database producer if it can be demonstrated that the producer has made substantial financial, material or human efforts to prepare and use the contents of the database.
  • Trade secrets: AI can also be protected under the rules on trade secrets, as long as the AI technology holder implements appropriate technical and organisational measures to keep it confidential.

10.2 How is innovation in the AI space incentivised in your jurisdiction?

Promoting the use of AI by small and medium-sized enterprises (SMEs) is a priority for the government. Start-ups are supported by the state through:

  • the provision of open data sets to support their development;
  • the development of a network of early adopter partners;
  • the development of AI-specific accelerators;
  • the development of AI-specific investment funds; and
  • sector-specific support.

The Hungarian government aims to:

  • promote experimentation;
  • build AI marketplaces;
  • encourage development through AI innovation prizes; and
  • support participation in university research projects.

Apart from this general framework, there are several initiatives that aim to promote innovation in the AI space:

  • The Artificial Intelligence National Laboratory was established to fund AI-related research with a total of HUF 12 billion available until 2025.
  • The Artificial Intelligence Coalition runs an online marketplace for AI providers, helping them to launch new AI projects. It has also established an accelerator centre where SMEs can apply to receive support in developing their business and communication streams with the help of AI.
  • The Hungarian National Bank has deployed a regulatory sandbox for fintech companies to provide a safe harbour for testing and impact assessment.
  • Hungary is participating in the European Information Technologies Certification Academy programme to officially recognise AI experts.

11 Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

As a general principle, all employers must respect the ban on discrimination. AI is often used in the talent recruitment process – especially in sorting applications or filtering applicants according to given criteria. Consequently, AI systems should run on a non-discriminatory basis, ensured by quality raw data; companies must thus train and test such systems properly before incorporating them into HR processes.

There are also some other specific implications for AI companies and companies using AI in the talent acquisition process:

  • If the use of an AI tool in the talent acquisition process restricts the privacy rights of employees/applicants, such use is only allowed if it passes the "necessity and proportionality" test.
  • Human oversight must be ensured: a person must be responsible for the final HR decision.
  • A data protection impact assessment is usually required before AI tools are used in the talent acquisition process.
  • Employees have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them.
  • AI tools must collect only data which is relevant to the recruitment/employment purpose and must avoid collecting sensitive personal data.

11.2 How can AI companies attract specialist talent from overseas where necessary?

AI companies can attract specialist talent through Hungary's Digital Nomad Programme – a fast-track, simplified route to applying for a Hungarian White Card. This type of residence permit may be issued for up to one year and can be renewed for a further year.

To apply for a White Card, the applicant must pursue a foreign gainful activity – that is, he or she must own his or her own remote foreign business or work for a company located outside Hungary. A continuous stay in Hungary is not a requirement, but leaving Hungary for more than 90 days could result in withdrawal of the White Card. A minimum monthly income of €2,000 is required.

The process for obtaining a White Card is not overly complex. In general, the applicant needs:

  • a valid passport;
  • a Hungarian lease agreement;
  • insurance covering Hungary;
  • documents supporting his or her financial background; and
  • proof of pursuing foreign gainful activity.

The immigration authority will decide on the application within 30 days of receipt.

If the applicant plans to stay in Hungary for a longer period, he or she can also apply for an intra-company transfer permit (valid for a maximum of three years) or an EU Blue Card (valid for a maximum of four years) under certain conditions.

12 Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

The Hungarian government is making significant investments in AI research and development. In addition, the goals of the Hungarian Artificial Intelligence Strategy include supporting AI start-ups by developing specific AI accelerators, investment funds and incubators. Since the launch of ChatGPT in November 2022, attention has turned towards generative AI.

No specific national legislative reforms with regard to AI are expected in Hungary in the next 12 months.

13 Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

For AI companies seeking to enter the Hungarian market, the top tip is to devote the necessary attention to ensuring legal compliance. As Hungary is an EU member state, it shares the European Union's general pro-regulatory enthusiasm, in contrast to the more laissez-faire approach found overseas. As the regulatory framework for AI products and services is still in a preparatory phase, AI companies should follow the general principles (eg, fairness, transparency, security, accountability) emphasised in existing legal instruments to prepare themselves for future regulation. It is also advisable to calculate the cost implications of compliance, as these may differ from those in other jurisdictions.

It already seems clear that any activity involving AI will require robust data protection measures. The National Authority for Data Protection and Freedom of Information takes a strict approach towards this issue: it recently imposed an unprecedented fine of HUF 250 million on a bank for using AI software in breach of the GDPR requirements.

AI companies should be especially careful about the following sticking points:

  • product development without appropriate legal support (all legal issues should be carefully considered prior to implementation);
  • the application of a vague set of principles (eg, AI companies can apply the ban on discrimination only if they understand the social and legal meaning behind such provisions);
  • fitting AI into compliance environments in which the application of AI has not yet been tested (eg, how a company can use AI in full accordance with ESG expectations); and
  • the allocation of liability/responsibility between different AI stakeholders.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.